Managed Guardian Service Documentation


Quick Start - Indexer

Welcome to the "Getting Started" documentation page! Whether you're new to our platform or looking for a refresher, this guide will walk you through the essential steps to kickstart your journey.

The Global Indexer is a pivotal tool within the Guardian ecosystem on the Hedera Network, designed to optimize data search, retrieval, and management. It offers advanced search capabilities across all Guardian data, including policies and documents, while improving data storage and indexing for efficient analytical queries and duplicate checks. With access to a comprehensive dataset from all Guardian instances, the Indexer ensures thorough data retrieval. Its user-friendly interface simplifies navigation, and its integration with Hedera and IPFS enhances the handling of large datasets and complex queries, making it an essential component for efficient data management.

Before we begin, let's figure out what type of user you are:

Existing MGS Account Users Start Here

New Users Without an MGS Account Start Here

Important Concepts

About the Trust Chain

The Next Generation of Registry Systems

Welcome to the Managed Guardian Service (MGS)! We give applications a way to mint emissions & carbon offset tokens without worrying about the complexities of managing the technology infrastructure.

Note: We are currently in the Beta phase. Documentation and usage are subject to change.

Overview

With regard to ecological markets, business leaders will find themselves in these four phases:

  • Creating Verified Supply

  • Establishing Demand

  • Buying & Selling

  • Offsetting

There are many rationales that can be applied here, such as Greenhouse Gas Emission Profiles, Renewable Energy Credits, and Carbon Offsets. While emission allowances are subject to government regulation, a Carbon Offset, for example, is an intangible asset created through a project or program whose activity can be claimed to reduce or remove carbon; that claim is independently verified and turned into a carbon offset. These offsets are minted, or issued, by an environmental registry that created the standard methodology or protocol used to create the verified carbon offset claim. The offset then represents the original owner’s property right claim to the carbon-related benefits. The asset owner(s) can then sell their credits directly to buyers, or at wholesale. The ultimate end user has the right to claim the benefits and can retire the offset permanently, usually as part of a netting process where the claimed CO2 benefits are subtracted from that end user’s other Greenhouse Gas (GHG) emissions.

The data used to create renewable energy or carbon offset claims that can be validated, verified, and turned into a product is called measurement, reporting, and verification (MRV) data. Today, the process of collecting this supporting data for carbon offsets is heavily manual and prone to errors. The main factors driving these errors are:

  • Poor data quality

  • Lack of assurance

  • Potential double counting

  • Greenwashing

  • Overall lack of trust

This is where the Guardian solution, which leverages a Policy Workflow Engine (PWE) and public ledger technologies, is a sensible approach to ameliorating the issues with the current processes. The dynamic PWE can mirror the standards and business requirements of regulatory bodies. In particular, the Guardian solution offers carbon markets the ability to operate in a fully auditable ecosystem by including:

  • W3C Decentralized Identifiers (DIDs): Decentralized Identifiers (DIDs) are a new type of globally unique identifier that are designed to enable individuals and organizations to generate their own identifiers using systems they trust.

  • W3C Verifiable Credentials (VCs): A digital document that consists of information related to identifying a subject, information related to the issuing authority, information related to the type of credential, information related to specific attributes of the subject, and evidence related to how the credential was derived.

  • W3C Verifiable Presentations (VPs): A collection of one or more VCs.

  • Policy workflow engines through fully configurable and human-readable “logic blocks” accessible through either a user interface or an application programming interface (API).

Types of Users

About Schemas

Watch this quick 2-minute video to learn about Schemas:

New User Without an MGS Account

Welcome to the "Getting Started" documentation page! Whether you're new to our platform or looking for a refresher, this guide will walk you through the essential steps to kickstart your journey.

Step 1: Access the Indexer Homepage

  • Open indexer.guardianservice.app in your browser.

  • On the "Welcome to Indexer" page, click the "Log In Through MGS" button.

  • You will be redirected to the MGS Login Screen.

Step 2: Sign Up for an MGS Account

At the MGS Login Screen:

  • Click the "Don’t Have an Account? Sign Up" link at the bottom of the page.

Step 3: Review Terms and Conditions

  • Carefully read the Terms and Conditions.

  • Click "Accept" to proceed.

Step 4: Fill Out the Request Form

  • Provide the following information:

    • Username.

    • Email address.

    • Password.

  • Click "Request Access" to submit the form.

Step 5: Authenticate with Your New Tenant Admin Account

At the MGS Login Screen:

  • Select the Admin tab.

  • Enter your newly created username and password.

  • Click "Log In" to access the Indexer.

    Existing MGS Account Users

    Welcome to the "Getting Started" documentation page! Whether you're new to our platform or looking for a refresher, this guide will walk you through the essential steps to kickstart your journey.

Step 1: Choose a Method to Access the Indexer

  • Option 1: Log in through the MGS Sidebar Menu:

    • If you are logged into your MGS account, locate the quick link to the Indexer at the bottom of the sidebar menu and click it.

    • You will be redirected to the Indexer application.

  • Option 2: Access the Indexer Homepage:

    • Open indexer.guardianservice.app in your browser.

    • On the "Welcome to Indexer" page, click the "Log In Through MGS" button.

Step 2: Authenticate Your Session

  • If You Are Already Logged Into MGS:

    • The Indexer will detect your existing MGS session and automatically authenticate you. You’ll be directed into the Indexer without needing to log in again.

  • If You Are Not Logged Into MGS:

    • You will be redirected to the MGS Login Screen.

Step 3: Log in Using Your Credentials

At the MGS Login Screen:

  • Select the appropriate tab:

    • "Admin": For Tenant Admin users.

    • "As User": For Policy users.

  • Enter your username and password.

  • Click "Log In" to access the Indexer.

    Custom MGS ChatGPT Assistant

    The Managed Guardian Service (MGS) Custom GPT is a specialized AI assistant designed to facilitate users in understanding and utilizing the Managed Guardian Service platform. This tool represents an advanced version of ChatGPT, customized specifically to cater to the needs of users engaging with MGS. It has been meticulously programmed with a comprehensive range of MGS-related documentation, policies, and operational guidelines.

    You can find it here: https://chat.openai.com/g/g-dTwGOkcw7-managed-guardian-service-assistant

    You can also find it in the Chat GPT Store

    The primary function of the MGS Custom GPT is to act as an interactive knowledge base. It helps users navigate the complexities of the MGS platform, which is pivotal for businesses involved in emissions reporting, carbon offset, and renewable energy credit creation. This AI assistant stands out due to its ability to quickly process and provide insights from a vast array of MGS documentation, which has been integrated into its system.

    Key Features of MGS Custom GPT

    1. Customized Assistance: Tailored specifically for the Managed Guardian Service, it provides focused and relevant information, making it a reliable source for MGS-related queries.

    2. Rich Knowledge Base: Equipped with extensive data from MGS documentation, it can answer a wide range of questions, from basic setup to complex operational procedures.

3. Efficient Query Resolution: Designed to interpret and respond to user queries by referencing the integrated MGS documentation, ensuring accurate and up-to-date information.

4. User-Friendly Interface: Simplifies the user's experience with the MGS platform, offering step-by-step guidance and clarifications on various aspects of the service.

5. Support for Various MGS Aspects: Capable of assisting with vault setups, Hedera account integration, policy understanding, token operations, and trust chain features.

6. Troubleshooting and Support: The tool can offer troubleshooting advice and guide users through resolving common issues encountered on the MGS platform.

    How to Use MGS Custom GPT:

    1. Ask Specific Questions: Users can inquire about specific aspects of MGS, such as setting up user profiles, managing tenants, or understanding policy implementations.

    2. Seek Clarifications: If there are aspects of the MGS documentation or operations that are unclear, the tool can provide detailed explanations.

    3. Explore Features: Users can explore different functionalities and features of the MGS platform through interactive questioning.

    The Managed Guardian Service Custom GPT serves as an invaluable asset for users, significantly enhancing their ability to effectively utilize the MGS platform. By providing instant access to a wealth of information and guidance, it empowers users to make the most out of the MGS services and capabilities.

    About MGS Vault

    Watch this quick video to learn about the Managed Guardian Service Vault


    Changelog


    Beta v10.1

    We are thrilled to bring you MGS Beta v10.1, aligned with open-source Guardian 3.0 and featuring key updates designed to enhance security, user experience, and accessibility.

    New Features

    Seamless Email-Based Login & Tenant Selection

    Logging in to MGS is now smoother than ever! We've removed the friction of manually entering Tenant IDs. Instead, users can now log in using just their email and password, with a tenant selection screen for those associated with multiple tenants. This streamlined approach eliminates confusion, improves accessibility, and enhances the overall user experience.


    Methodology Breakdown

    A set of regulations or instructions that specify how carbon offset projects are created, validated, confirmed, and tracked are referred to as policies in the Guardian. These regulations aid in ensuring that carbon offsets are legitimate, quantifiable, and capable of reducing or eliminating actual emissions.

    The Guardian platform offers a framework for developing and overseeing carbon offset projects in accordance with a number of widely accepted international norms, such as the Verified Carbon Standard (VCS) or the Gold Standard. For various carbon offset project types, such as renewable energy, energy efficiency, forestry, or agriculture, these standards provide specific requirements.

    The Guardian platform's policies are made to be flexible and adaptable to the particular requirements and objectives of each carbon offset project. They cover a variety of options and conditions, such as project parameters, baseline emissions, additionality standards, monitoring techniques, and reporting needs.

    Project developers can make sure that their carbon offset projects adhere to the highest standards of reliability and quality by establishing policies within the Guardian platform. Involved parties, investors, and buyers of carbon offsets who want to make sure that their investments contribute to actual and significant emissions reductions or removals can also receive transparency and accountability from them.

Watch the videos in this YouTube playlist to learn how to break down methodologies and create policies for the Guardian:

    Beta v5.1

This minor upgrade brings the monthly Guardian update into MGS.

    New

    • Core Guardian Upgrade to v2.21

For the full changelog and release notes on the open-source Guardian please visit: https://github.com/hashgraph/guardian/releases

    About Dry Run

    Watch this quick video to learn about Dry Run:

    About Retirement

    Watch this quick video to learn about Retirement Contracts

    Key Features of Managed Guardian Service (MGS)

Managed Guardian Service (MGS) builds on the core capabilities of the open-source Guardian v3.0, incorporating powerful cloud-driven enhancements to streamline and elevate your carbon market and environmental data management experience. Here’s a look at the key features that make MGS the ideal solution:

    Use the UI or APIs to create your own digital methodologies

Policies are one of the most important concepts to understand in the Guardian. We recommend that you take a moment and watch the video about Policies in the "Important Concepts" section. The Policy Workflow Engine defines and validates all requirements for methodologies. We give you the option to create them using the UI and also APIs. Make sure to read our guides below for a deeper understanding.

    Beta v10.2

    We’re excited to introduce MGS Beta v10.2, bringing important enhancements and fixes to improve system stability, user experience, and performance.

    New Features & Enhancements

Trustchain Stability Fix: Resolved an issue where accessing the Trustchain triggered a 422 error if the associated policy lacked a mint block. This fix ensures Trustchain visibility remains consistent and error-free regardless of policy configurations.

Improved Multi-Account Login Experience: Users with multiple accounts tied to the same email can now easily select which account to log into during authentication. This streamlined selection process enhances usability across tenants and roles.

Token Minting Performance Optimization: Addressed major performance bottlenecks during token minting, especially for high-load policies. This update significantly improves loading speed and prevents UI freezing during heavy operations.

    Beta v11

    We’re excited to introduce MGS Beta v11 — a release that reaffirms our commitment to uptime, enterprise integration, security, and Guardian innovation.

    New Features & Enhancements

Infrastructure Resilience & Uptime Enhancements: MGS Beta v11 introduces major upgrades to our core infrastructure, built to support seamless, zero-downtime deployments. This behind-the-scenes improvement ensures that updates, hotfixes, and feature rollouts happen without interrupting your operations. Whether you're streaming MRV data, minting tokens, or managing live policies, your work stays uninterrupted. These enhancements reinforce one of MGS’s core promises: maximum uptime, continuous performance, and uninterrupted trust.

Azure B2C Single Sign-On Integration: For enterprise teams building custom front ends, MGS now supports Azure Active Directory B2C (SSO) integration. Organizations can authenticate users through their own identity systems while seamlessly accessing MGS, with no separate login required.

    Web3.Storage

    Overview

    With the recent shift in the Managed Guardian Service (MGS) infrastructure, incorporating web3.storage as a critical component for data storage and management is essential. This guide provides an introduction to web3.storage, outlining its significance, operational mechanics, and the steps required for Tenant Admins to integrate it with their MGS tenants.

    Chapter 17: (Reserved for completion)

    Complete guide to deploying, monitoring, and maintaining carbon certification policies in production environments

    Chapter 16 covered advanced policy patterns and testing. Chapter 17 focuses on the critical final phase: deploying policies to production, managing live carbon credit certification systems, handling upgrades, and ensuring ongoing operational excellence.

    This chapter addresses the real-world challenges of running production carbon registries with financial and environmental stakeholders depending on reliable, accurate policy execution.

    Production Deployment Architecture

    About Policies

    Watch this quick 2-minute video to learn about policies:

    Beta v3

We're thrilled to announce that MGS is upgrading from Beta v2 to the powerful Beta v3!

    New

    • Hedera Network selection: Mainnet (new), Testnet, Previewnet

    • Core open-source Guardian Upgrade v2.8 to v2.11

    Integrating filebase with MGS Tenants

    Tenant Configuration

    Now, with a Secret Access Token generated, you can proceed with configuring a new or existing tenant in MGS. If you wish to create a new tenant, log in as Tenant Admin, select the ‘Tenants’ menu option, and click ‘+ Add New Tenant’. In the modal window, enter the Tenant Name, choose the appropriate Network from the list, and select ‘filebase’ among the IPFS Storage Provider options. In the ‘filebase token’ field, enter the Secret Access Token you copied earlier. Click the ‘Create Tenant’ button to finalize the creation of this tenant.

    If you need to change the IPFS Storage Provider for an existing tenant, select the ‘Tenants’ menu item, find the tenant, and click the ‘Open’ button. Then, go to the Settings tab, select ‘filebase’ from the IPFS Storage Provider list, and paste the Secret Access Token you copied earlier into the filebase token field. Click the Save Changes button at the bottom of the page to apply your changes.

    Beta v5

This release includes updates to the available IPFS storage providers.

    New

    • New Hedera Testnet reconfiguration after testnet reset

    • Web3.storage Validation

User Invite Status Tracking: A new UI panel within the Tenant dashboard now displays the real-time status of user invites. Admins can track if an invite was sent, accepted, or expired, with support for resending expired invites, making user onboarding more transparent and manageable.

Direct Guardian-to-Indexer Connection (Enhanced Integration): Guardian instances can now be directly connected to the MGS hosted Indexer, enabling access to advanced UI elements and functionality previously limited to manual local setups. This integration is authenticated, streamlined, and ensures high availability with minimal user intervention.

HashScan Integration for Hedera Links: All system-generated links for Topics, Tokens, and Hedera Account IDs have been updated from LedgerWorks to HashScan. This resolves broken link issues and ensures consistent access to Hedera ledger data.


    Updates

Guardian Upgrade to v3.1.1: MGS is now upgraded to align with open-source Guardian v3.1.1. This ensures continued compatibility and leverages the latest core improvements from the Guardian ecosystem.

    For the full changelog and release notes on the open-source Guardian v3.1.1 please visit: https://github.com/hashgraph/guardian/releases

Two-Factor Authentication for SR and Policy Users: Security is non-negotiable. MGS now enforces Two-Factor Authentication (2FA) for both Standard Registry and Policy User accounts. This ensures only verified users can access sensitive workflows and manage assets across tenants.

Aligned with Guardian Open Source v3.2: MGS now supports the latest Guardian 3.2 release, bringing expanded interoperability, rich data visualizations, and admin-friendly controls. Highlights include:

    • Cross-instance policy access — enabling decentralized collaboration

    • geoJSON support for visualizing complex geographic data directly on maps

    • Manual re-indexing tools for targeted data syncs on-demand

    For the full changelog and release notes on the open-source Guardian v3.2 please visit: https://github.com/hashgraph/guardian/releases

    What is web3.storage?

    web3.storage is a platform designed to facilitate easy and efficient storage of data on the decentralized web. Utilizing IPFS (InterPlanetary File System) and Filecoin, web3.storage offers a robust and scalable solution for data storage needs, particularly suited for applications within the web3 ecosystem.

    Guardian Production Infrastructure

    Production carbon registries require robust infrastructure supporting high availability, data integrity, and regulatory compliance.

    [This chapter is in progress]

  • Self-custody of keys via MGS Vault

  • High Availability

  • Improved DB CPU consumption

  • More pre-loaded open-source policies

  • New IPFS Storage Provider - MGS Managed IPFS Node

This will only cover what is new and improved with the Managed Guardian Service. For the full changelog and release notes on the open-source Guardian please visit: https://github.com/hashgraph/guardian/releases

    Managed IPFS Node

    Overview

    In response to the evolving needs of our Managed Guardian Service (MGS) infrastructure, we are thrilled to introduce the Managed IPFS node as a pivotal addition to our suite of data storage solutions. This section is dedicated to providing a comprehensive understanding of the Managed IPFS node, highlighting its importance, operational dynamics, and integration process for Tenant Admins within the MGS ecosystem.

    What is a Managed IPFS Node?

    The Managed IPFS node is a fully managed and hosted service provided by MGS, designed to streamline the storage and management of data on the decentralized web. Leveraging the power of the InterPlanetary File System (IPFS), our Managed IPFS node offers a seamless and scalable approach to handling vast amounts of data with enhanced security, redundancy, and ease of access.

    filebase

    Overview

    In response to evolving data management needs within the Managed Guardian Service (MGS) infrastructure, integrating filebase as a key IPFS provider has become imperative. This documentation serves as a comprehensive guide to incorporating Filebase, emphasizing its importance, functionality, and the step-by-step process required for Tenant Admins to seamlessly integrate it with their MGS setup.

    What is filebase?

    filebase is a pioneering platform that leverages the InterPlanetary File System (IPFS) to offer scalable and decentralized data storage solutions. By harnessing the power of IPFS and blockchain technology, Filebase provides users with a secure, efficient, and cost-effective method for storing data across a distributed network. This platform is exceptionally well-suited for applications demanding high data integrity, availability, and redundancy — characteristics that align with the core objectives of the MGS ecosystem.

    The integration of filebase with MGS not only enhances the platform's data storage capabilities but also aligns with the overarching goal of leveraging decentralized technologies for improved security and efficiency. This guide will navigate through the necessary steps to integrate filebase with MGS, ensuring a smooth transition for Tenant Admins aiming to optimize their data management strategies within the MGS framework.


    Can't create a policy? Use one that we have already preloaded for you!

    There's nothing worse than wanting to jump into the action, but not having all of the tools! The open-source Guardian community is ever growing and so is the collection of tested policies. As more become available, we'll add them to the list of preloaded policies for you to quickly drop them in.

    Multi-tenancy

Multi-tenancy is an architecture in which a single instance of a software application serves multiple customers. Each environment is called a tenant. During the beta phase, we will allow up to 3 tenants per Tenant Admin account. Additionally, you will be able to select which Hedera network you'd like to point your tenant to: the Hedera Mainnet, Testnet, or Previewnet. Let your imagination run wild on how you will use this feature. Tenants can serve your customers, act as sandbox/production environments, or even offer different use case designs. We look forward to hearing everyone's feedback on this!

    Flexible Data Storage with IPFS Storage Providers

    The Managed Guardian Service (MGS) enhances its data storage capabilities through integration with various IPFS Storage Providers, ensuring that organizations and individuals have access to a decentralized and secure method for managing their digital assets and environmental data. This approach not only bolsters data integrity and accessibility but also aligns with the decentralized ethos of blockchain technologies, offering a robust solution for the storage of sensitive information across a distributed network.

    Integrate with your system

    The Managed Guardian Service is a hosted environment where we provide you with resources, tools, and support. Once registered for the Managed Guardian Service, users will be given two options to get started. One option is to use a simple user interface to develop policies and run proof of concepts quickly. The other option is APIs for a fully customizable application experience.

    Secure self-custody with the MGS Vault

The Managed Guardian Service Vault is designed to benefit organizations and individuals looking to securely store their user account secrets, such as private keys. The Vault solution leverages the open-source version of HashiCorp Vault and is intended to be used with the Managed Guardian Service. Keep in mind that MGS has integrations with many other popular vaults on its roadmap, so requests are welcome. Once registered for the Managed Guardian Service, users will need to configure their profiles. They may choose to bring their own compatible vault or use the MGS Vault solution we deployed across all major cloud provider marketplaces, including the Microsoft Azure Marketplace, Google Cloud Platform Marketplace, and AWS Marketplace.

    Hosted Indexer: Data Tracking and Retrieval

    The Indexer in Managed Guardian Service (MGS) enables tracking and retrieval of data across carbon offsets, policies, and transactions. It offers advanced search capabilities, allowing users to quickly locate specific records, such as policy updates or carbon credit histories, by filtering attributes like project type and issuance date. Designed to improve data transparency and accessibility, the Indexer supports compliance reporting, impact analysis, and audit trails, ensuring that all indexed data is up-to-date, traceable, and easily accessible for informed decision-making.

    Get access to our support desk

The monitoring and alerting system is the backbone of our service. It allows us to detect any issues before they manifest themselves to users and enables us to take timely action. MGS is widely covered by monitoring and alerts that allow us to react to, prevent, and analyze any issues that may occur. However, in the event that technical support is needed, the MGS team has a help desk with SLAs to address your needs.

    Feel free to submit a ticket for any technical or non-technical needs at https://guardianservice.io/support/

    Want to get technical?

    Read our full technical blog to learn about all of the special cloud-based features we packed into the Managed Guardian Service here.


    Quick Start - MGS

    Welcome to the "Getting Started" documentation page! Whether you're new to our platform or looking for a refresher, this guide will walk you through the essential steps to kickstart your journey.

    Step 1: Sign Up for Tenant Admin Account

    Navigate to the MGS website: https://guardianservice.app/

    Click on "Sign Up" and enter your username, email, password, and agree to the terms of use.

    Access will be automatically granted upon completion.

    Step 2: Admin Login and Tenant Configuration

    Log in with your Admin Email and Password.

If it's a new Admin account, then you will be able to do the following:

    • Access the Tenant Admin screen; configure your subscription under the Subscription tab.

    • Navigate to the Tenants tab and click "Add New Tenant" for tenant configuration.

Set configurations like Tenant Name, Network Selection (Testnet, Mainnet, or Previewnet), and IPFS Storage Provider (Managed IPFS Node, Filebase, Web3.Storage).

Enter the necessary API Key and API Proof values (refer to the documentation for creation instructions) in the case of Web3.Storage/Filebase.

    Step 4: Inviting Users and Customizing Tenant Branding

    Use the Users tab to invite new members to your tenant by entering their email address and assigning a role. Select Standard Registry for users who will be managing and publishing registry policies. Choose User for individuals who will interact with the policies published by the registry.

    Customize tenant branding with unique names, colors, logos, and background images.

    Adjust IPFS Service Providers and modify API keys and proofs as needed.

    Step 5: Setting Up a Standard Registry User Account

    The first user is typically a Standard Registry account. This user establishes methodology requirements and issues tokens.

    Vault Selection

    Follow the on-screen instructions to select a vault. The MGS Vault is designed for organizations or individuals seeking a secure, self-custody option for storing account secrets like private keys.

    ℹ️ Note: Vault selection is required for Mainnet, but may be skipped when working on Testnet.

Refer to the step-by-step setup guides.

    Hedera Account Credentials

    Step 6: Exploring Advanced Features

In the sidebar, navigate to the Policy tab and the Schemas section to create and manage schemas.

    Dive into features like Artifacts, Modules, Policies, Tools, and Tokens.

    Learn to create policies from scratch or import them using the Policies tab.

    Step 7: Testing Policies with Dry Run Mode

    Use Dry Run mode to test policies in a simulated environment. Create virtual users and interact with policies as real-world users would.

    Step 8: Publishing and Inviting Policy Users

    Publish your policies for interaction by Policy users.

    Invite policy users to your tenant to submit data and engage with published policies.

    Step 9: Setting Up Policy User Account

    Similar to steps 4 and 5, Policy users need to be invited, and will also need to follow the steps to finish setting up their user account. Policy users then engage with the specific Standard Registry and interact with policies.

    Step 10: Final Steps for Policy Users

    Explore the List of Tokens and Policies tabs to associate with tokens and access published policies.

    Use advanced search features for finding relevant policies for MRV activities.

    Step 11: Use the MGS Custom GPT Assistant

Feel free to use the Managed Guardian Service Custom GPT. The Managed Guardian Service (MGS) Custom GPT is a specialized AI assistant designed to help users understand and utilize the Managed Guardian Service platform. This tool represents an advanced version of ChatGPT, customized specifically to cater to the needs of users engaging with MGS. It has been meticulously programmed with a comprehensive range of MGS-related documentation, policies, and operational guidelines.

    Note: If you are currently using the open-source Guardian APIs, migrating to Managed Guardian Service is really easy!

Simply change the API URL from what you are currently using (e.g., http://localhost:3002/api/v1/) to https://guardianservice.app/api/v1/
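If your client keeps the base URL in a single configuration value, the migration is a one-line change. The sketch below is illustrative only: the helper name, environment variables, and bearer-token header are assumptions rather than part of the documented MGS API contract.

// Minimal migration sketch (auth header shape and env vars are illustrative assumptions).
// Previously: const BASE_URL = 'http://localhost:3002/api/v1/';
const BASE_URL = process.env.MGS_BASE_URL ?? 'https://guardianservice.app/api/v1/';

async function mgsRequest(path, options = {}) {
  // Relative paths like 'tenants' resolve against the configured base URL.
  const res = await fetch(new URL(path, BASE_URL), {
    ...options,
    headers: {
      'Content-Type': 'application/json',
      Authorization: `Bearer ${process.env.MGS_TOKEN}`, // hypothetical token from your MGS login flow
      ...options.headers,
    },
  });
  if (!res.ok) throw new Error(`MGS request failed: ${res.status}`);
  return res.json();
}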

    Beta v10

    We are thrilled to bring you MGS Beta V10, aligned with open-source Guardian 3.0 and featuring key updates designed to enhance security, user experience, and data accessibility.

    New Features

Database Vault Restriction for Mainnet API Access: To ensure data security on the Mainnet, API access to the HashiCorp vault is now restricted, aligning API functionality with UI standards. This measure prevents unauthorized use of the vault, providing an added layer of protection for mainnet operations.

Enhanced Tenant ID Accessibility: Tenant ID visibility has been streamlined for improved user experience. Previously accessible only by email or Admin login, the Tenant ID is now displayed directly within the tenant dashboard, making it easier for users to locate this essential information.

User Role Selection on Invite: To improve the invitation flow, Tenant Admins can now select user roles (Standard Registry or User) when inviting new users. This update allows for better management of permissions and access levels, streamlining the onboarding process. Invitations are tailored to reflect the specific user role, enhancing clarity and usability.


    Indexer Enhancements

Login Access for Indexer Interface: We've added a dedicated Login/Signup button to the Indexer UI page, allowing users to directly access the login page without redirecting through the main MGS portal. This update simplifies access to Indexer functionality, making navigation smoother and more intuitive.

Global Search Integration with Indexer: The Indexer is now integrated with Global Search, enabling users to search policies across the entire Hedera Testnet and Mainnet from within MGS. This enhancement improves search capabilities, providing faster, more comprehensive access to policies across instances and standard registries.

Customizable Indexer Hosting: In this release, we’ve added options for hosting and customizing the Indexer. Organizations can now configure the Indexer to suit specific search and data access needs, making MGS even more flexible and adaptable to unique use cases.


    Updates

MGS Upgrade to Open-Source Guardian 3.0: In alignment with the latest open-source Guardian version 3.0, MGS Beta V10 includes all the latest features and improvements from the Guardian platform. This update ensures that MGS users have access to cutting-edge functionality, security enhancements, and performance optimizations.

For the full changelog and release notes on the open-source Guardian please visit: https://github.com/hashgraph/guardian/releases

    Beta v1

    We launched into production with the Managed Guardian Service Beta v1. It has all of the core features included in the open-source Guardian, but with some special cloud-driven features.

    New

    Below is a list of features that are included in the initial launch of the Managed Guardian Service Beta v1

    • Open-Source Guardian Version 2.7

    • Introduction to Admin Users

    • Multi-tenancy

    • Pre-loaded Policies

      • Carbon Offsets Policies:

        • Verra Redd+ VM0007 Developed by Envision Blockchain

        • Carbon Reduction Measurement - GHG Corporate Standard Developed by TYMLEZ

    • Downloadable APIs

This will only cover what is new and improved with the Managed Guardian Service. For the full changelog and release notes on the open-source Guardian please visit: https://github.com/hashgraph/guardian/releases

    Integrating Managed IPFS Node with MGS Tenants

    Tenant Admins play a crucial role in integrating the Managed IPFS node with their MGS tenants. Here's a step-by-step guide to get you started:

    For New Tenants

    1. Click the Add New Tenant Button

    2. Fill out the Tenant Name and select the appropriate Network

3. When asked for the IPFS Storage Provider, click the drop-down and select Managed IPFS Node

    For Existing Tenants

1. From the Tenant Admin screen, click the "Open" button

2. Navigate to the "Settings" tab

3. When asked for the IPFS Storage Provider, click the drop-down and select Managed IPFS Node

4. Save Changes

    Beta v2

    The Beta v2 Release includes new features such as improved tenant and user management, full asset lifecycles such as asset retirement, and much more.

    New

    Below is a list of improvements that are included in the Managed Guardian Service Beta v2

    • Upgrade to core open-source Guardian v2.8 (Retirement process for assets, Matched Assets, 3rd Party Content Providers, Modular Benefit Projects, and LedgerWorks Eco Explorer Implementation)

    • Adding 2 DOVU CRU methodologies to the preloaded policies (Agrecalc and Cool Farm)

• Extend the POST /tenants/invite endpoint with the ability to return the inviteId in the response

    • Updated Swagger Documentation for Beta V2

    • Enable Admins to manage users

    • Improved Internal Alerts

    • Enhanced Autoscaling for performance loading

    • Bug fixes

This will only cover what is new and improved with the Managed Guardian Service. For the full changelog and release notes on the open-source Guardian please visit: https://github.com/hashgraph/guardian/releases

    Beta v9

    We are excited to introduce MGS Beta V9, packed with new features and improvements to enhance your experience and provide greater flexibility in managing your operations.

    New

    Email Alert on Successful Publishing of Policies

    To improve communication and ensure that users are promptly informed, we have added an email notification feature. Whenever there is a successful publishing of any methodology on Testnet/Mainnet, an email alert with detailed information will be sent to the user.

    Further Evolution of Policy Comparison (Mass Diff)

    We have extended our policy comparison functionality to allow for mass-comparison of policies across the entire ecosystem. Users can now search for local policies similar or different to a given policy based on a similarity threshold, without needing to import them into the Guardian instance. This feature enhances the efficiency and breadth of policy analysis.

    Updates

    UI Improvements

    We have made several enhancements to the user interface, including updates to dialog boxes, notification bars, and login screens. These improvements aim to provide a more cohesive and user-friendly experience.

    Obsolete Banner Display

    The obsolete banner, which should appear at the top of the page when launching MGS, is now functioning correctly and will be displayed as intended.

    Policies with Tools in Dry Run Mode: Performance Improvement

    The performance of executing policies with tools referenced in dry run mode has been significantly improved. Users will now experience faster execution speeds when importing and running these policies.

For the full changelog and release notes on the open-source Guardian please visit: https://github.com/hashgraph/guardian/releases

    Compatible IPFS Storage Providers

    To ensure seamless integration and compatibility with a wide range of data storage needs within the Managed Guardian Service (MGS) framework, we have expanded our list of supported IPFS (InterPlanetary File System) storage providers. Each provider brings unique features and benefits tailored to different requirements, offering flexibility and choice to our users. Whether you are looking for enhanced security, specific geographic data residency, cost-efficiency, or scalability, our diverse range of compatible IPFS storage providers ensures that your data storage needs are met with the highest standards. Below is a list of IPFS storage providers that are fully compatible with MGS, designed to enhance your experience and optimize your data management strategy within the MGS ecosystem.

    MGS Managed IPFS Node

    Beta v3.2

This patch update to Beta v3.2 fixes some known issues. Essentially, it enables easier token discoverability and smoother operations of large policies.

    New

    • Guardian Core patches

  • Fixing an issue where created TokenIds were published with UUID formatting instead of a tokenId property.

      • Improvement of how the Policy Service handles large policies.

This will only cover what is new and improved with the Managed Guardian Service. For the full changelog and release notes on the open-source Guardian please visit: https://github.com/hashgraph/guardian/releases

    Return Tenants

    Return Tenants.

    POST /tenants

    Return Tenants. For Admin role only.

content:
  application/json:
    schema:
      type: object
      properties:
        totalCount:
          type: number
        tenants:
          $ref:
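For illustration, here is a hedged example of calling this endpoint from JavaScript and reading the totalCount and tenants fields shown in the schema above. The base URL, bearer-token header, and empty request body are assumptions; consult the Swagger documentation for the authoritative contract.

// Assumes an Admin bearer token; auth details are illustrative, not documented here.
const res = await fetch('https://guardianservice.app/api/v1/tenants', {
  method: 'POST',
  headers: {
    'Content-Type': 'application/json',
    Authorization: `Bearer ${process.env.MGS_ADMIN_TOKEN}`, // Admin role only, per the description above
  },
  body: JSON.stringify({}), // any paging/filter options are not documented here
});
const { totalCount, tenants } = await res.json(); // fields per the response schema above
console.log(`Found ${totalCount} tenants`, tenants);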

    Beta v6

    We're excited to announce MGS Beta v6. This includes several new features and improvements designed to enhance your experience and provide more options and flexibility for managing your operations.

    New

    1. Filebase Support Added

    In our continuous effort to expand and improve the IPFS solutions available in MGS, we have now added Filebase as an additional option. This integration allows users to choose Filebase for their IPFS needs, alongside the existing options. With Filebase support, users can leverage its unique features and benefits as part of their workflow in MGS.

2. Downtime Notification System

Understanding the importance of effective communication, especially during downtimes, we have introduced a new notification system for all users. This feature is designed to inform users about any planned or unexpected downtime promptly. Here's what makes the downtime notification system stand out:

    • Location and Visibility: The notification is prominently displayed at the top of the screen when enabled, ensuring maximum visibility.

    • Interactivity: Users can dismiss the notification with a simple click of the [X] button. Once closed, it will not reappear until a new message is issued.

    3. Enhanced Final User Profile Setup Wizard Descriptions

    To make the setup process as smooth and understandable as possible, we have added helpful descriptions to each step of the Final Setup wizard. These descriptions are designed to provide users with clear information about what is required at each step, ensuring that both Standard Registry and Default User roles can be configured with ease and confidence.

    4. Update to Guardian v2.22

    Beta v6 includes Guardian version 2.22, bringing all the latest improvements and fixes from the Guardian platform into MGS.

For the full changelog and release notes on the open-source Guardian please visit: https://github.com/hashgraph/guardian/releases

    IPFS Storage Providers

    Overview

    As part of the evolving Managed Guardian Service (MGS) platform, Tenant Admins now have the flexibility and autonomy to select their own IPFS (InterPlanetary File System) storage providers. They must select and configure their preferred IPFS storage provider prior to creating a tenant.

    This new feature significantly enhances the customization and control Tenant Admins have over their data storage solutions within the MGS ecosystem. This introduction aims to guide Tenant Admins through the process of selecting and integrating an IPFS storage provider with their MGS tenant.

    The Role of IPFS in MGS

    IPFS is a peer-to-peer network protocol that enables decentralized data storage and sharing. In the context of MGS, it serves as a backbone for storing digital environmental assets securely and efficiently. Choosing the right IPFS storage provider is crucial for optimizing data accessibility, redundancy, and overall system performance.

    Importance of Selecting an IPFS Provider

    1. Customized Data Storage Solutions: Tenant Admins can choose a provider that best fits their specific data storage needs and requirements.

    2. Enhanced Data Sovereignty: By selecting their own provider, Tenant Admins have greater control over where and how their data is stored.

3. Scalability and Flexibility: Different providers offer varying levels of scalability and flexibility, allowing Tenant Admins to tailor their storage solutions as their needs evolve.

4. Cost Optimization: With the ability to choose from various providers, Tenant Admins can select a cost-effective solution that aligns with their budget constraints.

    Steps for Tenant Admins

    1. Research and Evaluate IPFS Providers: Understand the offerings, features, and pricing models of various IPFS storage providers. Key factors to consider include storage capacity, redundancy, security measures, and network performance.

    2. Compatibility with MGS: Ensure that the chosen IPFS provider is compatible with the MGS platform. This compatibility is essential for seamless integration and operation within the MGS ecosystem.

3. Integration Process: Follow the specific steps provided to integrate the selected IPFS storage provider with your MGS tenant. This may involve configuring API connections, setting up access credentials, and customizing storage settings.

4. Testing and Validation: After integration, thoroughly test the setup to ensure that data storage and retrieval functionalities are working correctly and efficiently within your MGS tenant.

    Conclusion

    The ability to select their own IPFS storage providers empowers Tenant Admins with greater control and flexibility in managing their data storage solutions within the MGS platform. This feature aligns with the overarching goal of MGS to provide a customizable, secure, and efficient environment for managing digital environmental assets. Tenant Admins are encouraged to take advantage of this feature to optimize their MGS experience and meet their specific data storage needs.

    Setting up filebase

    Logging into filebase

    To start using filebase as an IPFS provider in MGS, you need to first register your account. If you already have an account, you can directly go to https://console.filebase.com. If you don’t, follow these steps:

    • To sign up for a filebase account, navigate to https://filebase.com. To create a new account, click the ‘Try for Free’ button in the top right corner of the webpage.

    • Next, fill out the form fields, including an email address and password, and agree to the filebase terms to create your account.

• You will receive an email with confirmation instructions. Click the link included in the email to confirm your account and complete the registration process. Once finished, you can access the filebase console at https://console.filebase.com.

    Buckets

    Buckets are like file folders; they store data and associated metadata. Buckets are containers for objects. Navigate to the Buckets dashboard by clicking on the ‘Buckets’ menu option. Here you can view your existing buckets and create new ones.

    If you already have the Bucket you wish to use with MGS, skip this step. To create a new bucket, click the ‘Create Bucket’ button in the top right corner of the webpage, enter the name for the new bucket, and click the ‘Create Bucket’ button.

    If successful, you will be redirected to the Bucket dashboard with your newly created bucket.

    Access Keys

    The Access Keys menu option leads you to the access keys dashboard. Here you can view, manage, and rotate your access keys. From this menu, you can also generate a Secret Access Token to be used with MGS. To generate this token, click the dropdown menu for 'Choose Bucket to Generate Token', then select the IPFS filebase Bucket you intend to use.

    Copy the generated Secret Access Token.

    Integrating Web3.Storage with MGS Tenants

    Tenant Admins play a crucial role in integrating Web3.Storage with their MGS tenants. Here's a step-by-step guide to get you started:

    For New Tenants

    1. Click the Add New Tenant Button

    2. Fill out the Tenant Name and select the appropriate Network

3. When asked for the IPFS Storage Provider, click the drop-down and select Web3.Storage

4. Fill out the IPFS Storage API Key and IPFS Storage API Proof that you obtained earlier.

    For Existing Tenants

1. From the Tenant Admin screen, click the "Open" button

2. Navigate to the "Settings" tab

3. When asked for the IPFS Storage Provider, click the drop-down and select Web3.Storage.

4. Fill out the settings and click "Save Changes."

    Beta v7

    We're thrilled to introduce MGS Beta v7, featuring significant updates and enhancements to optimize your experience and increase the flexibility for managing your operations.

    New

1. Fix for IPFS Resolution Issue: When using the MGS hosted IPFS storage provider option, we've resolved the "IPFS not resolved" error, enhancing the stability and reliability of our IPFS integrations.

2. Enhancements in Policy Import Process: Addressing performance issues, we've fixed a critical bug in the circuit traversal loop during policy comparisons, significantly reducing processing times and CPU utilization. Additional optimizations have also been made to improve memory usage.

3. MGS Vault Integration for BYOD Key: To further secure and customize your experience, we've integrated MGS Vault to support Bring Your Own DID (BYOD) Key, allowing for enhanced security and personalization within the MGS framework.

4. User Interface Improvements: This version rolls out several UI enhancements designed to improve interaction and usability across the MGS platform.

5. Update to Guardian v2.23: Continuing our commitment to staying current with the latest technological advances, MGS has been updated to Guardian core version 2.23, bringing all the latest improvements and fixes from the Guardian platform into MGS.

For the full changelog and release notes on the open-source Guardian please visit: https://github.com/hashgraph/guardian/releases

    Return Tenant Related Settings

    Return Tenant related settings.

    GET /tenants/settings

    Get Tenant related settings. For Tenant Admin role only.

content:
  application/json:
    schema:
      $ref: '#/components/schemas/TenantSettings'
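A correspondingly hedged GET example; as above, the base URL and bearer-token header are assumptions, not the documented contract.

// Returns an object shaped per '#/components/schemas/TenantSettings'.
const settings = await fetch('https://guardianservice.app/api/v1/tenants/settings', {
  headers: { Authorization: `Bearer ${process.env.MGS_TENANT_ADMIN_TOKEN}` }, // Tenant Admin role only
}).then((res) => res.json());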

    Beta v4

This Beta v4 release brings a new UI, new features, the introduction of AI, and more!

    New

    • Revolutionized User Interface: Navigate with ease and enjoy a more intuitive experience.

    • Custom Tenant Branding: Tailor every one of your tenant spaces with unique branding elements for a personalized touch.

    • Enhanced Standard Registry Attributes: Dive into a more comprehensive and detailed asset management journey.

• MGS Vault Additions: Secure your data with integration options including Azure Key Vault and GCP Secret Manager. Learn more about MGS Vault configurations.

    • Core Guardian Upgrade to v2.20: Experience the pinnacle of our foundational technology, ensuring efficiency and reliability.

    • AI-Powered Search Capabilities: Navigate through data with unprecedented ease and intelligence.

For the full changelog and release notes on the open-source Guardian please visit: https://github.com/hashgraph/guardian/releases

    Beta v8

    We are excited to introduce MGS Beta V8, packed with new features and improvements to enhance your experience and provide greater flexibility in managing your operations.

    New

    Improve the UI/UX for OpenSource Policy Import Function

    We have added a cleaner and more intuitive way to search for open-sourced policies directly within MGS, making it easier to find and import the policies you need.

    Expose APIs for User Setup Flow

    To improve integration capabilities, we've exposed public APIs for user creation functionalities (e.g., Standard Registry and Policy Users). This allows customers to seamlessly integrate MGS into their existing systems, managing user setup processes including IPFS storage providers and vault selections through their own interfaces.

    Policy Lifecycle Management

    Addressing performance inefficiencies, we've optimized the policy service to handle policy states more effectively. By managing obsolete policies post-Hedera testnet reset, we've reduced unnecessary load and SaaS infrastructure costs. Users can now better manage their policy data, minimizing potential data loss and improving overall satisfaction.

    Updates

Update MGS to Guardian v2.24

In our commitment to staying current with technological advances, MGS has been updated to open-source Guardian version 2.24, incorporating the latest improvements and fixes from the Guardian platform to enhance the overall functionality and reliability of MGS.

For the full changelog and release notes on the open-source Guardian please visit: https://github.com/hashgraph/guardian/releases

    Send Invite Link

    Send Invite link.

    POST /tenants/invite

    Send an Invite link for a new user. For Tenant Admin role only.

    Request Body

    Name
    Type
    Description
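The request-body fields are not specified above, so the following is an assumption-laden sketch rather than the authoritative contract. It sends a hypothetical email and role payload (role selection on invite was introduced in Beta v10) and reads the inviteId that, per the Beta v2 release notes, this endpoint can return.

// Field names (email, role) are assumptions; verify them in the Swagger documentation.
const res = await fetch('https://guardianservice.app/api/v1/tenants/invite', {
  method: 'POST',
  headers: {
    'Content-Type': 'application/json',
    Authorization: `Bearer ${process.env.MGS_TENANT_ADMIN_TOKEN}`, // Tenant Admin role only
  },
  body: JSON.stringify({
    email: 'new.user@example.com',
    role: 'USER', // or 'STANDARD_REGISTRY'; role selection on invite arrived in Beta v10
  }),
});
const { inviteId } = await res.json(); // Beta v2 extended this endpoint to return the inviteId
console.log('Invite sent:', inviteId);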

    Part I: Foundation and Preparation

    Establishing the foundational knowledge for methodology digitization on Guardian platform

    Overview

    Part I provides the essential foundation for understanding methodology digitization, the Guardian platform, and the VM0033 reference methodology. This part consists of three focused chapters designed to prepare readers for the technical implementation phases that follow.

    Part V: Calculation Logic Implementation

Status: ✅ Complete and Available

Implementation Focus: VM0033 emission reduction calculations, Guardian Tools architecture, and comprehensive testing frameworks

    This part covers the implementation of calculation logic in Guardian environmental methodologies, with VM0033 as the primary example and AR Tool 14 demonstrating Guardian's Tools architecture.

    Part Overview

    Part V provides comprehensive guidance on implementing and testing calculation logic for environmental methodologies in Guardian:

    Beta v3.1

    Introducing MGS Beta v3.1 - delivering enhanced tenant logs, a faster Guardian experience, Guardian v2.12, and more pre-loaded policies!

    New

• Tenant Logs

  • Tenant Admins can now access comprehensive logs specific to their tenant activity.

• Guardian v2.12 Upgrade

  • Improved minting speed due to new batching process.

  • Enhanced error handling for smoother operation.

  • Improved memory performance for faster processing.

  • Artifact tagging for easier identification and handling.

  • Enhanced policy configurator now offers customizable "themes".

  • Overall, expect a quicker user experience.

• More Pre-loaded Policies

  • Addition of more pre-loaded policies for a more comprehensive policy creator experience.

This will only cover what is new and improved with the Managed Guardian Service. For the full changelog and release notes on the open-source Guardian please visit: https://github.com/hashgraph/guardian/releases

    Two-Factor Authentication (2FA) Setup Guide

    Overview Two-factor authentication (2FA) adds an extra layer of security to your MGS account. Once enabled, signing in will require your password and a one-time code from your mobile device. This applies to all user types, including Tenant Admins, Standard Registry and Policy User accounts.

    Access Your Profile

    • Log into your MGS account.

• Click your user name at the bottom left of the sidebar.

• Click the three-dot (ellipsis) menu next to your user name and select Profile.

• Find "Security Settings."

• Click Setup next to Two-factor authentication.

Start the Setup Process

• A window will open titled “Enable two-factor authentication.”

    Delete Tenant User

    Delete Tenant User

    DELETE /tenants/{tenantId}/users/{userId}

    Delete Tenant User

    Return user Tenants

    Return user Tenants only.

    GET /tenants/user

    Return user Tenants. For Tenant Admin role only.

    Return Users for Tenant

    Return Tenant Users

    POST /tenants/{tenantId}/users

    Return users for Tenant. For Tenant Admin role only.

    Create New Tenant

    Create new Tenant.

    PUT /tenants/user

    Create new Tenant. For Tenant Admin role only.

    Delete Tenant

    Delete Tenant

    POST /tenants/delete

Delete Tenant and all related data. This action can't be undone. For Tenant Admin role only.


    Chapter 18: Custom Logic Block Development

    Complete implementation of VM0033 emission reduction calculations using Guardian's customLogicBlock, including baseline emissions, project emissions, leakage calculations, and final net emission reductions with real JavaScript production code.
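To give a flavor of what the chapter covers, here is a minimal, hypothetical customLogicBlock-style calculation. The schema field names and the documents/done conventions are illustrative assumptions; VM0033's real production code is substantially more detailed.

// Hypothetical customLogicBlock body: derive net emission reductions (ERR)
// from baseline emissions, project emissions, and leakage.
// Field names are illustrative, not VM0033's actual schema keys.
(function calculateNetERR() {
  const fields = documents[0].document.credentialSubject[0];

  const baseline = Number(fields.baseline_emissions_tCO2e);
  const project = Number(fields.project_emissions_tCO2e);
  const leakage = Number(fields.leakage_emissions_tCO2e);

  // Net ERR = baseline - project - leakage: the overall structure of
  // VM0033's final net emission reduction calculation.
  fields.net_ERR_tCO2e = baseline - project - leakage;

  // Hand the enriched document back to the policy workflow.
  done(documents);
})();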

    Chapter 19: Formula Linked Definitions (FLDs)

    Foundation concepts and architectural framework for parameter relationships and dependencies in environmental methodologies, establishing patterns for future FLD implementation.

    Chapter 20: Guardian Tools Architecture and Implementation

    Complete guide to building Guardian Tools using AR Tool 14 as practical example, covering the extractDataBlock → customLogicBlock → extractDataBlock mini-policy pattern for standardized calculation tools.

    Chapter 21: Calculation Testing and Validation

    Comprehensive testing framework using Guardian's dry-run mode and customLogicBlock testing interface, with validation against VM0033 test artifacts at every calculation stage.

    Key Artifacts and Resources

    • VM0033 Test Spreadsheet - Official Allcot test case

    • Final PDD VC - Complete Guardian VC with net ERR data

    • ER Calculations - Production JavaScript implementation

    • AR Tool 14 Implementation - Complete Guardian Tool configuration

    Prerequisites for Part V

    • Completed Parts I-IV: Foundation through Policy Workflow Implementation

    • Understanding of Guardian's Policy Workflow Engine (PWE)

    • Basic JavaScript programming knowledge

    • Familiarity with environmental methodology calculations

    Learning Outcomes

    After completing Part V, you will be able to:

✅ Implement calculation logic using Guardian's customLogicBlock with real production examples

✅ Build Guardian Tools using the extractDataBlock and customLogicBlock pattern

✅ Test and validate calculations using Guardian's dry-run mode and testing interfaces

✅ Debug calculation issues using Guardian's built-in debugging tools

✅ Create production-ready environmental methodology implementations

    Next Steps

    Part V completes the core implementation knowledge needed for Guardian methodology digitization. Future parts will cover:

    • Part VI: Integration and Testing - End-to-end policy testing and API automation

    • Part VII: Deployment and Maintenance - Production deployment and user management

    • Part VIII: Advanced Topics - External system integration and troubleshooting


    Part V Complete: You now have comprehensive knowledge of calculation logic implementation in Guardian, from individual customLogicBlocks to complete testing frameworks. These skills enable building production-ready environmental methodologies with confidence in calculation accuracy.

    Tenant Admins can now access comprehensive logs specific to their tenant activity.
  • Guardian v2.12 Upgrade

    • Improved minting speed due to new batching process.

    • Enhanced error handling for smoother operation.

    • Improved memory performance for faster processing.

    • Artifact tagging for easier identification and handling.

    • Enhanced policy configurator now offers customizable "themes".

    • Overall, expect a quicker user experience.

  • More Pre-loaded Policies

    • Addition of more pre-loaded policies for a more comprehensive policy creator experience.

  • This will only cover what is new and improved with the Managed Guardian Service. For the full changelog and release notes on the open-source Guardian please visit: https://github.com/hashgraph/guardian/releases

    Click the three-dot (ellipsis) menu next to your user name and select Profile.
  • Find "Security Settings."

  • Click Setup next to Two-factor authentication.

  • Start the Setup Process

    • A window will open titled “Enable two-factor authentication.”

    Scan the QR Code or Enter the Key

    • Open an authenticator app on your mobile device (such as Authy, Google Authenticator, or similar).

    • Scan the QR code displayed on the screen.

    • If you cannot scan the code, copy the provided key and enter it manually into your authentication app.

    Enter the Code From Your Authenticator App

    • The authenticator app will generate a 6-digit code.

    • Enter this code in the “Code” field.

    • Click Enable.

    Download Your Recovery Codes

    • After enabling 2FA, you will be prompted to download your recovery codes.

    • Save these codes in a safe place. If you ever lose access to your authenticator app, you can use a recovery code to log in.

    2FA Status Confirmation

• Once setup is complete, your profile will display: Two-factor authentication: Active

    • You can deactivate 2FA at any time from this screen if needed.

    Additional Notes

    • 2FA is optional but strongly recommended for all users.

    • The setup process is the same for both Standard Registry and Policy User accounts.

    • If you lose both your authenticator app and recovery codes, contact MGS support for assistance.

Path Parameters

| Name | Type | Description |
| --- | --- | --- |
| tenantId* | String | Tenant ID |
| userId* | String | User ID |

```yaml
content:
  application/json:
    schema:
      $ref: '#/components/schemas/SuccessResponse'
```

```yaml
content:
  application/json:
    schema:
      $ref: '#/components/schemas/Error'
```
Request Body

| Name | Type | Description |
| --- | --- | --- |
|  | Array | New Tenant fields |

```yaml
content:
  application/json:
    schema:
      $ref: '#/components/schemas/Tenant'
```

```yaml
content:
  application/json:
    schema:
      $ref: '#/components/schemas/Error'
```

Request Body

| Name | Type | Description |
| --- | --- | --- |
|  | Array | Tenant ID and Tenant name to confirm deletion |

```yaml
content:
  application/json:
    schema:
      $ref: '#/components/schemas/SuccessResponse'
```

```yaml
content:
  application/json:
    schema:
      $ref: '#/components/schemas/Error'
```
  • Renewable Energy Credits Policy:

    • International Renewable Energy Credit Standard Developed by Envision Blockchain

  • Carbon Emission Policies:

    • Remote Work GHG Policy Developed by Envision Blockchain

    • Carbon Emissions Measurements - GHG Corporate Standard Developed by TYMLEZ

```yaml
content:
  application/json:
    schema:
      $ref: '#/components/schemas/Tenant'
```

```yaml
content:
  application/json:
    schema:
      $ref: '#/components/schemas/Error'
```


```yaml
content:
  application/json:
    schema:
      oneOf:
        - $ref: '#/components/schemas/SuccessInviteWithCode'
        - $ref: '#/components/schemas/SuccessResponse'
```

```yaml
content:
  application/json:
    schema:
      $ref: '#/components/schemas/Error'
```

```yaml
content:
  application/json:
    schema:
      type: object
      properties:
        totalCount:
          type: number
        tenants:
          $ref: '#/components/schemas/Tenant'
```
Or, if it's an existing Admin account with different users having the same email address, you will be able to select from a list of users (SRs/Users/Tenants).

    You’ll need to enter your Hedera Account ID and Private Key.

    • If you do not have a Hedera account:

      • For Testnet:

        1. Visit the Hedera Developer Portal.

        2. Create a new Testnet account.

        3. Choose the ED25519 key type — do not select ECDSA.

        4. Download or copy the DER Encoded Private Key — do not use the HEX Encoded format.

      • For Mainnet:

1. Use a Hedera-enabled wallet (e.g., HashPack).

        2. Create a Mainnet account and ensure it is funded with HBAR.

        3. Export the ED25519 key in DER Encoded format.

    ⚠️ Only ED25519 keys in DER format are supported by the Managed Guardian Service.
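If you generate keys programmatically, a minimal sketch using the Hedera JavaScript SDK looks like the following (assuming the @hashgraph/sdk package; verify the output format against your wallet or the portal):

```javascript
// Minimal sketch: generate an ED25519 key pair and print it in DER format.
const { PrivateKey } = require("@hashgraph/sdk");

const privateKey = PrivateKey.generateED25519();
console.log("DER private key:", privateKey.toStringDer()); // the value MGS expects
console.log("DER public key: ", privateKey.publicKey.toStringDer());
```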

    Digital Identity (DID) Setup

    Next, set up your digital identity. You can either:

    • Allow MGS to create a new DID document for you, or

    • Select "Bring Your Own DID", in which case you’ll need to input your existing DID keys.

    Organization Profile

Fill out your company profile. This information will appear in the Standard Registry Hedera Topic for network visibility.

Please refer to our IPFS Service Providers documentation for compatible IPFS service providers and guides, and to the MGS Vault documentation for compatible vaults.
    Chapters

Chapter 1: Introduction to Methodology Digitization

Chapter 2: Understanding VM0033 Methodology

Reading Time: ~18 minutes
Purpose: Provide comprehensive domain knowledge of VM0033 before technical implementation

Chapter 3: Guardian Platform Overview

Reading Time: ~6 minutes
Purpose: Introduce Guardian's technical architecture and capabilities for methodology developers

    Learning Outcomes

    After completing Part I (~32 minutes total reading time), readers will:

    • Understand the methodology digitization process and its benefits

    • Have knowledge of VM0033 structure and requirements

    • Understand Guardian platform architecture and capabilities

    • Be prepared to begin methodology analysis and technical implementation

    Prerequisites

    • Basic understanding of environmental methodologies and carbon markets

    • Familiarity with JSON and basic programming concepts

    • Access to Guardian platform instance for hands-on practice

    • VM0033 methodology document for reference

    Progress Tracking

    Track your progress through Part I:

    Navigation

    Handbook Structure

    • Current: Part I - Foundation and Preparation

    • Next: Part II: Analysis and Planning (Coming Soon) - Systematic methodology analysis

    Resources

    • Guardian Documentation - Complete platform documentation

    • VM0033 Parsed Content - Source methodology

    • Methodology Library - Additional methodologies

    • Guardian Architecture - Technical architecture details

• API Guidelines - Integration guidance

    Next Steps

    Upon completion of Part I, proceed to Part II: Analysis and Planning (coming soon) to begin systematic methodology analysis and implementation planning.

Path Parameters

| Name | Type | Description |
| --- | --- | --- |
| tenantId* | String | Tenant ID |

Request Body

| Name | Type | Description |
| --- | --- | --- |
|  | String | Users filters |
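A sketch assuming the documented String "Users filters" body; the host and token are placeholders:

```javascript
// Hypothetical sketch: list users for a Tenant (Tenant Admin only).
async function getTenantUsers(host, accessToken, tenantId, filter = "") {
  const res = await fetch(`${host}/tenants/${tenantId}/users`, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${accessToken}`,
    },
    body: JSON.stringify(filter), // "Users filters" (String)
  });
  const { totalCount, tenant, users } = await res.json();
  return { totalCount, tenant, users };
}
```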

```yaml
content:
  application/json:
    schema:
      type: object
      properties:
        totalCount:
          type: number
        tenant:
          $ref: '#/components/schemas/Tenant'
        users:
          type: array
          items:
            $ref: '#/components/schemas/UserShort'
```

    Shared Resources

    Common templates, frameworks, and systems used across all parts of the Methodology Digitization Handbook

    Overview

    This directory contains shared infrastructure used across all parts (I-VIII) of the Methodology Digitization Handbook to ensure consistency, quality, and maintainability.

    Shared Components

Templates

Standard templates for consistent content structure across all chapters and parts

VM0033 Integration System

System for ensuring accurate VM0033 references throughout all handbook content

Guardian Integration

System for linking handbook content with existing Guardian documentation

Artifacts Collection

Comprehensive collection of test artifacts, Guardian implementations, calculation tools, and validation materials including:

    • VM0033 Reference Materials: Complete methodology documentation and Guardian policy implementation

    • Test Data & Validation: Official test cases, real project data, and Guardian VC documents

    • Guardian Tools & Code: Production implementations including AR Tool 14 and calculation JavaScript

    • Schema Templates: Excel-first schema development templates for Guardian integration
• Development Tools: Python extractors and validation utilities

    Usage Guidelines

    For Content Developers

    1. Use Standard Templates: All chapters must follow templates in templates/

    2. Follow VM0033 Integration: Use vm0033-integration/ system for all methodology references

3. Link Guardian Docs: Follow guardian-integration/ patterns for existing documentation

4. Leverage Artifacts: Use artifacts/ collection for testing, validation, and implementation examples

5. Test with Real Data: Validate all examples against official test cases and production implementations

    For Methodology Implementers

    1. Start with Artifacts: Use test artifacts and reference implementations as foundation

    2. Validate Calculations: All implementations must match test artifact results exactly

3. Use Production Code: Reference er-calculations.js and AR-Tool-14.json for proven patterns

4. Test Thoroughly: Use Guardian's dry-run mode with provided test documents

5. Follow Patterns: Use schema templates and policy examples for consistent implementation

    For Part Maintainers

    1. Reference Shared Systems: Link to shared infrastructure rather than duplicating

    2. Contribute Improvements: Enhance shared systems for all parts

    3. Validate Compliance: Ensure part-specific content follows shared standards

4. Update Artifacts: Keep artifact collection current with platform changes

5. Test Integration: Verify all shared resources work with latest Guardian versions

    Integration with Parts

    Each part should reference these shared systems:
```markdown
<!-- In each part's README.md -->
## Content Development Guidelines

This part follows the shared handbook infrastructure:
- **Templates**: [Shared Templates](../_shared/templates/README.md)
- **VM0033 Integration**: [VM0033 System](../_shared/vm0033-integration/README.md)
- **Guardian Integration**: [Guardian System](../_shared/guardian-integration/README.md)
- **Artifacts Collection**: [Test Data & Implementation Examples](../_shared/artifacts/README.md)

## Testing & Validation

All examples and implementations in this part are validated against:
- **Official Test Cases**: VM0033_Allcot_Test_Case_Artifact.xlsx
- **Production Code**: er-calculations.js and AR-Tool-14.json
- **Guardian Integration**: final-PDD-vc.json and vm0033-policy.json
```

    Maintenance

    Shared System Updates

    • Updates to shared systems benefit all parts automatically

    • Version control ensures consistency across handbook

    • Centralized maintenance reduces duplication

    • Artifact collection updated with Guardian platform evolution
Version Consistency: Shared resources maintain compatibility across handbook parts.

    Quality Assurance

    • Calculation Accuracy: All artifacts validated against methodology requirements

    • Guardian Compatibility: Production code tested in Guardian environment

    • Test Coverage: Comprehensive test cases covering all calculation scenarios

    • Documentation Quality: All artifacts include usage instructions and integration examples


    Complete Shared Infrastructure: This comprehensive shared system provides templates, integration frameworks, and a complete artifact collection including production Guardian implementations, test data, and validation materials. Everything needed for methodology digitization is centralized here for consistency and efficiency.

    Artifact Collection Highlights: The artifacts collection includes real production code (er-calculations.js), complete Guardian Tools (AR-Tool-14.json), official test cases (VM0033_Allcot_Test_Case_Artifact.xlsx), and Guardian-ready documents (final-PDD-vc.json) for comprehensive testing and validation.

    Part VIII: Advanced Topics and Best Practices

    Advanced integration techniques, troubleshooting procedures, and expert-level methodology implementation patterns

    Part VIII covers advanced topics for expert-level methodology implementation, including sophisticated external system integration, comprehensive troubleshooting procedures, and best practices learned from production deployments.

    Overview

    Building on operational deployment from Part VII, Part VIII addresses complex integration scenarios, advanced troubleshooting techniques, and optimization strategies for large-scale methodology implementations serving thousands of users.

    Why Advanced Topics Matter:

    • Complex integration scenarios require sophisticated architectural patterns

    • Production issues demand systematic troubleshooting and resolution procedures

    • Performance optimization enables scaling to enterprise-level deployments

    • Best practices prevent common pitfalls and ensure long-term success

    Part VIII Structure

Chapter 27: Integration with External Systems

Bidirectional data exchange between Guardian and external platforms. Covers data transformation using VM0033's dataTransformationAddon block and external data reception using MRV configuration patterns from metered energy policies.

Chapter 28: Troubleshooting and Common Issues

Common problems encountered during methodology digitization and their solutions, with specific examples from VM0033 implementation. Covers debugging techniques, performance optimization, and issue resolution.

    Prerequisites

    From Previous Parts

    • Parts I-VII: Complete methodology implementation through production deployment

    • Experience with methodology operations and user management

    • Understanding of production system monitoring and maintenance

    Technical Requirements

    • Advanced Guardian platform knowledge and API expertise

    • Experience with external system integration and data transformation

    • Understanding of production troubleshooting and debugging techniques

    Learning Outcomes

    After completing Part VIII, you will be able to:

    Advanced Integration Mastery

    • Implement data transformation using dataTransformationAddon blocks with JavaScript

    • Configure external data reception using externalDataBlock and MRV patterns

    • Handle Guardian-to-external system data export and formatting

    • Set up automated monitoring data collection from external devices and systems

    Expert Troubleshooting

    • Diagnose and resolve complex methodology implementation issues

    • Optimize performance for large-scale production deployments

    • Implement comprehensive monitoring and alerting systems

    • Handle edge cases and unusual integration scenarios

    Best Practices Implementation

    • Apply proven patterns from successful methodology deployments

    • Avoid common pitfalls and implementation mistakes

    • Optimize for maintainability, scalability, and performance

    • Establish expert-level quality assurance and testing procedures

    Implementation Timeline

    Chapter 27 (External Integration): 3-4 hours

    • Advanced integration pattern implementation

    • Enterprise system connectivity and data transformation

    Chapter 28 (Troubleshooting): 2-3 hours

    • Comprehensive troubleshooting procedures and issue resolution

    • Performance optimization and advanced debugging techniques

    Total Part VIII Time: 5-7 hours for advanced mastery and expert-level implementation

    Status

    ✅ Available - Part VIII chapters are complete and ready for use.


    Completion: Part VIII completes the Methodology Digitization Handbook, providing comprehensive coverage from foundation concepts through expert-level implementation and troubleshooting.

    Templates

    Standard templates for consistent content structure across all handbook parts

    Overview

    These templates ensure consistent structure, formatting, and quality across all chapters in the Methodology Digitization Handbook (Parts I-VIII).

    Available Templates

Chapter Section Template

Standard structure for individual chapter sections with:

    • GitBook formatting (hints, tabs, collapsible sections)

    • VM0033 integration points

    • User input requirement markers

    • Guardian documentation reference patterns

Chapter Summary Template

Standard structure for chapter summaries with:

    • Key takeaways organization

• Testing and validation sections

• Next chapter preparation

    Template Usage Guidelines

    Available Elements

    Templates may or may not include these elements depending on chapter context:

    • Learning Objectives: Specific, measurable outcomes

    • Prerequisites: Clear requirements and dependencies

    • VM0033 Context: Practical methodology examples

• Guardian Integration: Links to existing documentation

• User Input Requirements: Explicit markers for required input

• Validation Procedures: Testing and verification methods

    GitBook Formatting Standards

    • Hint Blocks: <div data-gb-custom-block data-tag="hint" data-style='info|success|warning|danger'></div>

• Tabs: <div data-gb-custom-block data-tag="tabs"></div>

    • Collapsible Sections: <details><summary>Title</summary>Content</details>

    • Code Blocks: Proper syntax highlighting

    • Cross-References: Consistent linking patterns

    Content Quality Requirements

    • Reading Time Constraints: Specific time limits per template type

    • Dual Audience Focus: Content serves both Verra maintenance and newcomer learning

    • Practical Focus: Emphasis on actionable guidance over theory

    • Accuracy Requirements: All examples must be user-validated

    Template Customization

    Part-Specific Adaptations

    Templates can be adapted for specific parts while maintaining core structure:

    • Part-specific learning objectives

    • Relevant Guardian documentation references

    • Appropriate VM0033 examples for the part's focus

    Chapter-Specific Modifications

    Individual chapters may modify templates for specific needs:

    • Additional sections for complex topics

    • Specialized validation procedures

    • Extended examples for difficult concepts

    • Custom formatting for technical content

    Quality Assurance

    Template Compliance Validation
```markdown
## Template Compliance Checklist

For each chapter section:
- [ ] Follows appropriate template structure
- [ ] Includes all required elements
- [ ] Uses proper GitBook formatting
- [ ] Marks user input requirements
- [ ] Links to Guardian documentation appropriately
- [ ] Meets reading time constraints
- [ ] Serves dual audience effectively
```


    Template Usage: All handbook content must follow these templates to ensure consistency, quality, and maintainability across all parts.

    Azure B2C Single Sign-On (SSO) Integration Guide

Overview

Managed Guardian Service (MGS) supports Single Sign-On (SSO) through Azure B2C for organizations integrating their own front-end application with MGS. This capability is available as part of the Cortex integration pattern, allowing organizations to use their existing Azure B2C tenant for authentication. Azure B2C SSO is not available in the default MGS UI—it is supported only for integrated front ends.

    Key Points

    • Azure B2C SSO can be enabled for any MGS tenant, but configuration is tenant-specific (one Azure B2C connection per tenant).

    • All Azure B2C application setup and management must be performed in the end user’s Azure portal before connecting to MGS.

    • Only tenant admins can configure Azure B2C SSO in MGS.

    Prerequisites

    • An Azure B2C tenant and application registered in the organization’s Azure portal.

    • The following details from Azure B2C:

      • Issuer URL

  • Application (Client) ID

  • JWKS URL

    Enabling Azure B2C SSO in MGS

    1. Create or Select a Tenant

    • Log into the MGS admin interface.

    • As a tenant admin, create a new tenant or select an existing tenant from the “Tenants” list.

    2. Access the Azure B2C Tab

    • Click “Open” for the desired tenant.

    • Navigate to the Azure B2C tab in the tenant configuration.

    3. Enable Azure B2C

    • Click the Enable button.

    4. Enter Azure B2C Details

    • Fill in the following fields using information from your Azure B2C portal:

      • Issuer URL (e.g., https://your-tenant-name.b2clogin.com/your-tenant-id/v2.0/)

  • Application (Client) ID (from the Azure B2C registered application)

      • JWKS URL (public key set endpoint, typically available from Azure B2C)

    • Click Save Changes.
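For orientation, hypothetical example values might look like the following; replace the tenant name, IDs, and policy name with your own, and confirm the exact URL formats in your Azure B2C portal:

```
Issuer URL:     https://contoso.b2clogin.com/11111111-2222-3333-4444-555555555555/v2.0/
Application ID: 22222222-3333-4444-5555-666666666666
JWKS URL:       https://contoso.b2clogin.com/contoso.onmicrosoft.com/B2C_1_signin/discovery/v2.0/keys
```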

    5. Confirm Configuration

    • Once saved, MGS will use your Azure B2C settings for authentication to this tenant through your integrated/custom front end.

    Notes

    • Azure B2C setup and application registration must be completed in your own Azure portal. MGS only connects to the already-configured Azure B2C app.

    • If you need to disable or update Azure B2C, use the Disable button or update the configuration fields as needed.

    • Azure B2C SSO is not available on the default MGS user interface; it is supported only through integrated or custom UI implementations following the Cortex integration pattern.

    Troubleshooting

    • Ensure all URLs and IDs are entered correctly from your Azure B2C portal.

    • For issues with SSO login, verify the Azure B2C configuration and application permissions in Azure.

    • Contact your organization’s Azure administrator or MGS support for assistance.

    Part VI: Integration and Testing

    Complete methodology validation and production deployment preparation using Guardian's testing and API frameworks

    Part VI transforms your methodology implementation from working calculations into production-ready systems. Using VM0033's patterns and Guardian's testing capabilities, you'll learn to validate complete workflows, automate operations through APIs, and prepare for large-scale deployment.

    Overview

    Building on the calculation logic from Part V, Part VI focuses on system-level validation and operational readiness. These chapters teach you to test methodology implementations as complete systems, automate workflows through Guardian's API framework, and validate production readiness using real-world scenarios.

    Why Integration and Testing Matters:

    • Methodology implementations serve thousands of users across complex workflows

    • Carbon credit accuracy and compliance depend on complete system validation

    • Production operations require automated data processing and error handling

    • Integration with external systems enables scalable monitoring and reporting

    Part VI Structure

    Chapter 22: End-to-End Policy Testing

    Test complete methodology workflows across all stakeholder roles using Guardian's dry-run capabilities. Learn to create virtual users, simulate multi-year project lifecycles, and validate complex state transitions using VM0033's production patterns.

    Key Learning:

    • Multi-role testing with virtual users (Project Proponent, VVB, Standard Registry)

    • Complete workflow simulation from PDD submission to token issuance

    • Production-scale data validation and performance testing

    • Cross-component integration validation

    Chapter 23: API Integration and Automation

    Automate methodology operations using Guardian's REST API framework. Build production-ready automation systems, integrate with external monitoring platforms, and create comprehensive testing suites for continuous validation.

    Key Learning:

    • Guardian API authentication and endpoint mapping

    • Automated data submission using VM0033 policy block APIs

    • Virtual user management for programmatic testing

    • Cypress integration for automated regression testing

    Prerequisites

    From Previous Parts

    • Part I-II: Understanding of Guardian platform and methodology analysis

    • Part III: Production-ready schemas for data capture and processing

    • Part IV: Complete policy workflows with stakeholder role management

    • Part V: Working calculation logic with individual component testing

    Technical Requirements

    • Guardian platform access with API capabilities

    • VM0033 policy imported and configured for dry-run testing

    • Development tools for API testing (Postman, curl, or similar)

    • Basic understanding of automated testing concepts

    Learning Outcomes

    After completing Part VI, you will be able to:

    Testing Mastery

    • Multi-Role Testing: Create and manage virtual users for complete stakeholder workflow validation

    • Production Simulation: Test methodology implementations under realistic data volumes and user loads

    • Integration Validation: Ensure seamless operation between schemas, workflows, and calculations

    • Error Handling: Validate error conditions and edge cases across complete methodology workflows

    API Integration Excellence

    • Automated Operations: Build production-ready automation systems using Guardian's API framework

    • External Integration: Connect methodology workflows with monitoring systems and external registries

    • Testing Automation: Create comprehensive testing suites for continuous validation and regression testing

    • Production Deployment: Prepare methodology implementations for large-scale operational deployment

    Production Readiness

    • Scalability Validation: Confirm methodology implementations handle production user volumes and data processing

    • Operational Monitoring: Implement monitoring and alerting for production methodology operations

    • Maintenance Procedures: Establish procedures for ongoing methodology maintenance and updates

    • Stakeholder Readiness: Prepare documentation and training materials for methodology users

    Success Metrics

    For Methodology Developers

    • Confident deployment of methodology implementations in production environments

    • Automated testing reducing manual validation effort by >80%

    • Integration capabilities enabling connection with organizational systems

    • Scalable operations supporting hundreds of concurrent projects

    For Technical Teams

    • Complete testing coverage validating all methodology workflows and calculations

    • API automation enabling programmatic methodology operations and integration

    • Production monitoring and alerting systems ensuring methodology reliability

    • Maintenance procedures supporting long-term methodology operations

    For Standards Organizations

    • Reduced operational overhead through automated workflow processing

    • Improved data quality through comprehensive validation and error handling

    • Enhanced user experience through reliable, scalable methodology implementations

    • Lower support burden through robust testing and error prevention

    Implementation Timeline

    Chapter 22 (End-to-End Testing): 3-4 hours

    • Multi-role testing framework setup and execution

    • VM0033 complete workflow validation

    • Production-scale testing and performance validation

    Chapter 23 (API Integration): 2-3 hours

    • Guardian API authentication and endpoint mapping

    • Automated workflow development and testing

    • External system integration patterns

    Total Part VI Time: 5-7 hours for complete integration and testing mastery

    Getting Started

    Begin with Chapter 22: End-to-End Policy Testing to establish comprehensive testing frameworks, then proceed to Chapter 23: API Integration and Automation to automate operations and prepare for production deployment.

    Part VI completes your methodology digitization journey, transforming individual components into production-ready systems that scale to serve thousands of users while maintaining accuracy and compliance with methodology requirements.


    Next Steps: After completing Part VI, your methodology implementation is ready for production deployment. Parts VII-VIII (coming soon) will cover deployment procedures, maintenance protocols, and advanced integration patterns.



How to Generate Web3.Storage Key and Proof

For additional information, please visit: https://web3.storage/docs/#quickstart

Step By Step Process

    Following are the steps to follow to generate Web3.Storage API values:

1. Create an account on https://web3.storage, specifying an email address you have access to, as account authentication is based on email validation. Follow the registration process through to the end: choose an appropriate billing plan for your needs (e.g. 'starter') and enter your payment details.

2. Install w3cli as described in the corresponding section of the web3.storage documentation.

    You'll need Node version 18 or higher, with NPM version 7 or higher to complete the installation

    You can check your local versions like this:
```bash
node --version && npm --version
```

Install the @web3-storage/w3cli package with npm:
```bash
npm install -g @web3-storage/w3cli
```

3. Create your 'space' as described in the 'Create your first space' section of the documentation:
```bash
w3 space create
```

4. Execute the following to set the Space you intend on delegating access to:
```bash
w3 space use
```

5. Execute the following command to retrieve your Agent private key and DID:
```bash
npx ucan-key ed
```

    Note: The private key (starting with Mg...) is the value to be used in the environment variable IPFS_STORAGE_KEY.

6. Retrieve the IPFS_STORAGE_PROOF by executing the following:
```bash
w3 delegation create <did_from_ucan-key_command_above> | base64
```

    The output of this command is the value to be used in the environment variable IPFS_STORAGE_PROOF.

    To summarize, the process of configuring delegated access to the w3up API consists of execution of the following command sequence:

1. w3 login

2. w3 space create

3. w3 space use

4. npx ucan-key ed

5. w3 delegation create <did> | base64

    Demo Video

    Youtube


    Part II: Analysis and Planning

    Methodology analysis and digitization planning using the approach developed during VM0033 digitization

    Part II transforms the foundation established in Part I into practical, actionable digitization plans through analysis and planning. Building directly on your understanding of methodology digitization concepts, VM0033 domain knowledge, and Guardian platform capabilities, this section shares the workflow developed during the first VM0033 methodology digitization project.

    The four chapters in Part II follow the sequence we found most effective during VM0033 digitization: methodology decomposition → equation mapping → tool integration → test artifact development. Each chapter builds incrementally toward the technical implementation phases that will come in Part III, ensuring you have the analysis and planning foundation needed for successful digitization.

    The Approach We Developed

    Part II follows the approach developed during VM0033 digitization work. This workflow emerged from experience working through environmental methodology requirements and represents what was learned about moving from methodology understanding to implementation readiness.

    The Analysis Approach:

    1. Methodology Analysis (Chapter 4): Break down methodology documents into manageable components using structured reading techniques

    2. Mathematical Component Extraction (Chapter 5): Use recursive analysis to identify all equations and parameter dependencies

    3. External Dependencies Integration (Chapter 6): Handle CDM tools, VCS modules, and other external calculation resources

4. Validation Framework Creation (Chapter 7): Develop test artifacts that serve as accuracy benchmarks

    This sequence ensures that no critical elements are missed while building toward implementation. Each step validates and builds upon previous work, reducing the risk of discovering missing requirements during technical development.

    Why This Approach Works: Starting with broad understanding, then progressively narrowing focus to specific components, handling external dependencies, and finally creating validation frameworks helps manage methodology complexity while ensuring important requirements aren't missed. This natural problem-solving approach works well for methodology digitization by keeping each stage manageable while building toward implementation.

    Chapter Progression and Learning Objectives

Chapter 4: Methodology Analysis and Decomposition

Focus: Approach to reading and analyzing methodology PDFs, identifying key components, stakeholders, and workflow requirements.

    What You'll Learn: Techniques for breaking down methodologies like VM0033 into digitization-ready components. You'll learn structured reading approaches that focus on core methodology sections, parameter extraction techniques, and recursive analysis fundamentals that serve as the foundation for all subsequent work.

    VM0033 Application: Step-by-step analysis of VM0033's structure, demonstrating how to identify and prioritize the most critical sections for digitization. You'll see how VM0033's complexity can be decomposed into manageable components while maintaining the integrity of the overall methodology requirements.

Chapter 5: Equation Mapping and Parameter Identification

Focus: Mathematical component extraction using recursive analysis techniques starting from final emission reduction formulas.

    What You'll Learn: The recursive approach to equation mapping that helps ensure no mathematical dependencies are missed. You'll learn parameter classification systems, dependency tree construction, and documentation techniques that create calculation frameworks ready for implementation.

    VM0033 Application: Mapping of VM0033's emission reduction equations, including baseline emissions, project emissions, and leakage calculations. You'll work through actual VM0033 equations using the recursive approach, building dependency trees that capture all parameter relationships.

Chapter 6: Tools and Modules Integration

Focus: Approach to handling external tools and modules that methodologies reference, creating unified calculation frameworks.

    What You'll Learn: Integration techniques for CDM tools, VCS modules, and other standardized calculation components. You'll learn to create cohesive calculation systems that integrate multiple external dependencies while managing versioning and compatibility requirements.

    VM0033 Application: Integration of the tools implemented for VM0033, including AR-Tool14 for biomass calculations, AR-Tool05 for fossil fuel emissions, and AFLOU for risk assessment. You'll see how to create unified frameworks that incorporate external calculation resources while maintaining VM0033's specific requirements.

Chapter 7: Test Artifact Development

Focus: Creating test spreadsheets that serve as validation benchmarks for digitized methodology implementations.

    What You'll Learn: How to work with Verra to develop test scenarios using real Allcot project data, covering all methodology pathways and creating input datasets. You'll learn to create test artifacts that serve as accuracy standards for digital policy validation.

    VM0033 Application: Development of VM0033 test spreadsheet with multiple project scenarios covering different wetland types and restoration activities. Using the actual VM0033 test case artifact, you'll understand how test frameworks validate digitized methodologies.

    Building on Part I Foundation

    Part II assumes you have completed Part I and builds directly on that foundation. The concepts introduced in Part I - methodology digitization principles, VM0033 domain knowledge, and Guardian platform capabilities - form the essential context for the analysis and planning techniques introduced in Part II.

    Progressive Technical Depth: While Part I focused on understanding and context, Part II introduces the technical rigor needed for implementation. However, the technical depth remains focused on analysis and planning rather than coding or configuration. You'll work with methodology content, equation structures, and test frameworks, but the implementation details come in Part III.

    Practical Industry Focus: Every technique in Part II comes from real-world digitization projects. The approaches, recursive analysis methods, and integration techniques represent practices used successfully in methodology implementations like VM0033.

    Part II Completion and Part III Readiness

    Completing Part II ensures you have the analysis and planning foundation needed for Part III (Schema Design and Development). The approach developed through these four chapters provides the detailed understanding required for technical implementation.

    What You'll Have Accomplished:

    • Methodology analysis skills applicable to any environmental methodology

    • Mathematical component extraction using recursive techniques

    • External tool integration planning and unified framework design

    • Validation framework with test artifacts serving as accuracy benchmarks

    Preparation for Part III: The detailed analysis and planning work in Part II directly supports the schema design and policy workflow development covered in Part III. The parameter classifications, dependency trees, and test artifacts created in Part II become the foundation for Guardian schema development and policy implementation.

    Time Investment and Learning Approach

    Part II is designed for focused, practical learning with each chapter requiring 15-20 minutes of reading time. The total investment of approximately 60-80 minutes provides comprehensive analysis and planning capabilities that significantly reduce the time required for technical implementation phases.

    Recommended Approach: Complete Part II chapters sequentially, as each builds on previous analysis work. The systematic progression ensures you develop comprehensive analysis capabilities while maintaining practical focus on implementation preparation.

    The industry techniques introduced in Part II represent knowledge gained through real-world methodology digitization experience. Mastering these systematic approaches provides the foundation for efficient, accurate methodology implementation across any environmental standard or framework.


    Chapter Navigation

| Chapter | Title | Focus | Reading Time |
| --- | --- | --- | --- |
| 4 | Methodology Analysis and Decomposition | Systematic document analysis and component identification | ~15-20 min |
| 5 | Equation Mapping and Parameter Identification | Recursive mathematical component extraction | ~15-20 min |
| 6 | Tools and Modules Integration | External dependency integration and unified frameworks | ~15-20 min |
| 7 | Test Artifact Development | Validation framework and test benchmark creation | ~15-20 min |

    Sequential Learning: Complete chapters in order for optimal learning progression and systematic skill development.

    Ready to Begin: With Part I foundation complete, you're prepared for the systematic analysis and planning techniques in Part II. Start with Chapter 4 to begin learning the methodology digitization approach we developed.

    Chapter 12: Schema Testing and Validation Checklist

    After defining schemas, you need to test and validate them before deployment. This chapter provides a practical checklist to ensure your schemas work correctly and provide good user experience.

    Schema Validation Checklist

    1. Set Default, Suggested, and Test Values

    Add values to help users and enable testing. These are helpful but not mandatory.

    In Guardian Schema Editor:

    • Default Value: Pre-filled value that appears when users first see the field

    • Suggested Value: Recommended value shown to guide users

    • Test Value: Value used for testing schema functionality

Example Values Setup:

[Image: Schema edit interface showing test, suggested, and default values]
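As an illustration only (the key names below are hypothetical, not Guardian's internal schema format), the three value types for a numeric field can be thought of as:

```javascript
// Hypothetical illustration of the three value types for one schema field.
// Key names are assumptions for readability, not Guardian's internal format.
const projectAreaField = {
  title: "Project Area (ha)",
  type: "number",
  defaultValue: 0,      // pre-filled when users first open the form
  suggestedValue: 100,  // recommended value shown to guide users
  testValue: 42.5,      // used when exercising the schema in preview/dry-run
};
```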

    Benefits:

    • Users see helpful starting values

    • Testing becomes easier with pre-filled data

    • New users understand expected input formats

    2. Preview and Test Schema Functionality

    Use Guardian's preview feature to test your schema before deployment.

    Preview Testing Process:

    1. Click "Preview" in Guardian schema interface

    2. Fill out form fields using test values

    3. Test conditional logic by changing enum selections

    4. Verify required field validation works

Test These Elements:

• Sub-schemas load and display correctly

• File upload fields accept appropriate formats

    3. Update Schema UUIDs in Policy Workflows

    Insert your new schema UUIDs where documents are requested or listed in policy workflow blocks.

    UUID Replacement Process:

    1. Copy new schema UUID from JSON schema (click hamburger menu next to schema row, click on "Schema")

    2. Open policy workflow configuration

    3. Find blocks that use old schema references:

  • requestVcDocumentBlock

  • documentsSourceAddon

4. Replace old UUID with new schema UUID

5. Save policy configuration

[Image: JSON edit mode for a block]

    4. Verify Test Artifact Completeness

    Ensure no fields are missing compared to your test artifact design from Part II.

    Completeness Check:

    1. Open your test artifact spreadsheet from Part II analysis

    2. List all required parameters from methodology

    3. Check each parameter has corresponding schema field

    4. Verify calculation fields capture all intermediate results

Missing Field Checklist:

• Confirm evidence fields cover all required documentation

• Time-series fields exist for monitoring schemas

    5. Optimize Logical Flow and User Experience

    Organize fields and sections for intuitive user experience.

    UX Organization Principles:

    • Logical Grouping: Group related fields together

    • Progressive Disclosure: Basic information first, complex details later

    • Clear Labels: Use terminology familiar to domain experts

• Helpful Ordering: Required fields before optional ones

• Conditional Logic: Show relevant fields based on previous selections

Field Organization Checklist:

• Optional fields appear after required ones

    Example Logical Flow:

    Once schemas pass this validation checklist, they're ready for integration into Guardian policy workflows. Well-tested schemas provide:

    • Smooth user experience for data entry

    • Accurate data types for calculations

    • Proper validation to prevent errors

• Clear organization for efficient workflows

• Reliable foundation for policy automation

    The next part of the handbook covers policy workflow design, where these validated schemas integrate with Guardian's policy engine to create complete methodology automation.

    Chapter 19: Formula Linked Definitions (FLDs)

    Understanding Guardian's parameter relationship framework for environmental methodologies

This chapter details the use of Formula Linked Definitions (FLDs) and how they enable users to view and cross-check human-readable mathematical representations of the customLogicBlock calculations whenever they look at relevant schemas, policies, or documents with data. It also describes how to create Formula Linked Definitions by linking the relevant fields in schemas to the parameters in the methodology's mathematical equations.

Once the FLDs are created and the relevant Verifiable Credentials (VCs)/schemas are viewed in the published policy, the formulas are displayed alongside the relevant fields, enabling users such as VVBs and auditors to verify that the formulas are in sync with the methodology and that the calculations are accurate.

    Learning Objectives



    After completing this chapter, you will be able to:
    • Understand the concept and architecture of Formula Linked Definitions in Guardian

    • Identify parameter relationships suitable for FLD implementation and its mapping with the fields in policy schemas

    • Implement FLDs to enable users to view formulas in customLogicBlock calculations

    • Design parameter validation workflows using FLD patterns

    • Recognize opportunities for FLD optimization in VM0033 calculations

    Prerequisites

    • Completed Chapter 18: Custom Logic Block Development

    • Understanding of parameter dependencies from Part II: Analysis and Planning

    • Familiarity with VM0033 calculation structure from er-calculations.js

    Building Formula Linked Definitions

When navigating to "Manage Formulas" from the sidebar in Guardian, you can choose to create a new formula or import one from a .formula file. This documentation walks through creating a new formula (FLD) from scratch.

Once you click Create New Formula, you will see three tabs:

    Overview Tab

In this tab, you enter basic details about your formula, such as its name, description, and the policy it belongs to.

    Edit Formula

There are 4 types of items available for composing a formula:

• Constants are fixed values that can be used in a formula. This item contains three fields for the constant's:

      • Name

      • Description

      • Value

• Variables represent the data coming in from documents. A variable can be linked to a particular field in the policy's schemas or to a component of another FLD formula. Along with the name and description, this item also has a:

  • Link (Input) field, where the particular field from the schemas, or a component from other formulas (FLDs), can be added.

• Formula items can be used to input the mathematical formula. Along with the name and description fields, a formula item also has a:

  • Formula field, where the mathematical formula can be added with the built-in math keyboard or in LaTeX form.

  • Link (Output) field, which indicates the field in the document schema where the result of the calculation defined in the customLogicBlock is located.

• Text, a component which allows describing the calculation algorithm without using mathematical notation. This component does not require any specific syntax. Text items contain the following fields:

      • Name of the text

      • Description of the text

Using a combination of the above 4 items, a Formula Linked Definition can be generated that explains the code/calculations that happen in the customLogicBlock. The best approach is to work from the bottom up: create all the small formulas and the variables/constants they depend on, then work your way up to the final formula that represents the main formula of the methodology. A formula item can be used inside another formula, creating a hierarchy that lets end users track how each component is calculated.

For better readability, it is recommended to add relevant names and descriptions for the above items.
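For example, a top-level formula item might be entered in LaTeX form. The equation below is an illustrative sketch consistent with VM0033's overall structure (baseline emissions minus project emissions minus leakage), not the methodology's exact notation:

```latex
% Illustrative net emission reductions formula -- not VM0033's exact equation
NER_t = BE_t - PE_t - LK_t
```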

    Attach Files

Here you can attach all the relevant documents concerning the methodology that can help with verification of the formulas. This helps users (VVBs, auditors, etc.) look up the documents in Guardian itself instead of finding them on the Internet. The attached files are shown to users in the Files tab when the published document is viewed (refer to Viewing Formula Linked Definitions).

    Viewing Formula Linked Definitions

Once the policy and the formulas are published, every relevant document (VC) will have a button beside the linked fields to view the FLD. Once clicked, the formula display dialog shows all linked formulas and provides facilities to navigate through the components of these formulas. In the dialog, all the relationships that were added can be seen, along with the values filled in by the user. This makes verification of the calculations and formulas easier.

Along with the formulas, there is a Files tab which shows all the files attached by the FLD developer (usually the policy developer).

    Chapter Summary

Formula Linked Definitions provide a structured approach to managing parameter relationships in Guardian methodologies, so that users can cross-verify that the formulas used and the calculations behind the scenes (customLogicBlock) are correct.

    Key takeaways:

    • FLDs enable users to view human readable mathematical representations of the calculations taking place in the CustomLogicBlock

    • VM0033 offers clear examples of parameter relationships suitable for FLD implementation

• FLDs allow users to browse associations between fields in schemas/documents and the corresponding variables in the displayed math formulas.

• The Guardian platform allows users to navigate the hierarchy of formulas and the data they represent, and to view the mapping of variables in formulas to fields in schemas.

    Next Steps

    Chapter 20 will demonstrate implementing specific AR Tool calculation patterns, showing how the parameter relationships we've identified in FLDs translate into working calculation code for biomass and soil carbon assessments.

    References and Further Reading

    • Guardian customLogicBlock Documentation

    • VM0033 Calculation Implementation

    • VM0033 Test Case Artifacts


    Chapter 8: Schema Architecture and Foundations

    Guardian's schema system is more sophisticated than simple data collection forms. When implementing VM0033, we needed to translate over 400 structured data components for wetland restoration methodology requirements into Guardian's schema architecture. This required understanding how schemas integrate with Guardian's broader platform capabilities while maintaining usability for different stakeholder types.

    This chapter demonstrates schema development foundations using VM0033 implementation as a concrete example. VM0033's complexity provides practical examples of architectural patterns, design principles, and implementation approaches that apply to environmental methodology digitization more broadly.

    The schema architecture establishes the foundation for translating methodology requirements from Part II analysis into working Guardian data structures. Rather than building everything at once, establishing architectural understanding first enables building schemas that handle complexity while remaining practical for real-world use.

    VM0033 Schemas

    Guardian Schema System Foundation

    Guardian schemas serve multiple functions beyond data collection. They define data structures, generate user interfaces, implement validation rules, support calculation frameworks, and create audit trails through Verifiable Credentials integration.

    Guardian Schema Functions:

    • Data Structure Definition: Specify exactly what information gets collected and how it's organized

    • User Interface Generation: Automatically create forms that stakeholders use for data input

    • Validation Rule Implementation: Ensure data meets methodology requirements before acceptance

• Calculation Framework Support: Provide data structures that calculation logic operates on

• Audit Trail Creation: Generate immutable records for every data submission and modification

    VM0033 demonstrates how these functions work together. The methodology's complex calculation requirements needed schemas that could capture parameter data accurately, generate usable interfaces for Project Developers and VVBs, validate data according to VM0033 specifications, and support calculation workflows for emission reduction quantification.

    JSON Schema Integration: Guardian builds on JSON Schema specifications for data structure definitions. Every parameter identified in Part II analysis translates into JSON Schema field definitions with appropriate types, validation rules, and relationships.

    Verifiable Credentials Structure: Each schema generates Verifiable Credentials (VCs) that create cryptographic proof of data integrity. For VM0033, this means every project submission, monitoring report, and verification result becomes an immutable record with full audit trail capabilities.
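As a rough illustration of that structure, a schema-backed submission is wrapped in the standard W3C Verifiable Credential envelope. The credentialSubject keys below are hypothetical placeholders rather than VM0033's production field keys:

```typescript
// Sketch of a VC produced from a Guardian schema submission.
// Subject field names are invented for illustration.
const monitoringReportVc = {
  "@context": ["https://www.w3.org/2018/credentials/v1"],
  type: ["VerifiableCredential"],
  issuer: "did:hedera:...", // DID of the issuing party
  issuanceDate: "2024-01-15T00:00:00Z",
  credentialSubject: {
    project_area_ha: 250,
    monitoring_year: 2023,
    baseline_emissions_total: 10500.0, // t CO2e
  },
  proof: {}, // cryptographic proof attached by Guardian
};
```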

    Schema Content Classifications

    Guardian organizes schema content into five distinct types, each serving different purposes in methodology digitization. VM0033 uses all five types across its schema implementation:

    general-data: Basic project information, stakeholder details, geographic data, and descriptive content that doesn't require complex validation. VM0033's project description sections use general-data for project locations, implementation schedules, and stakeholder consultation results.

    parameter-data: Methodology-specific parameters with equations, units, data sources, and justifications. These components implement the mathematical framework from Part II analysis. VM0033's parameter-data includes biomass density values, emission factors, and quantification approach selections.

    validation-data: Calculation results, emission reduction outcomes, and verification results that require special audit trail handling. VM0033's validation-data captures final carbon stock calculations, emission reduction totals, and VVB verification decisions.

tool-integration: External tool implementations including AR Tools, VCS modules, and methodology-specific calculation frameworks. VM0033 integrates AR-Tool05 for fossil fuel emissions and AR-Tool14 for biomass calculations through tool-integration components.

    guardian-schema: Complex nested schemas and advanced Guardian features requiring sophisticated configuration. VM0033's monitoring period management and multi-year calculation tracking use guardian-schema features for handling temporal data relationships.

    This classification system helps organize complex methodologies like VM0033 while ensuring each component uses appropriate Guardian features and validation approaches.

    Two-Part Schema Architecture

    For VM0033 we implemented a two-part schema structure that separates project description from calculation implementation. This pattern worked because methodologies have foundational project information that establishes context, and calculation machinery that processes that information into emission reduction or removal results.

    Project Description Foundation

    The Project Description schema establishes all foundational project information while supporting multiple certification pathways. For VM0033, this meant supporting both VCS-only projects and VCS+CCB projects through conditional logic that adapts the interface based on certification selection.

    Core Project Description Components:

    • Project Metadata: Title, location, timeline, proponent information, and basic project characterization

    • Certification Pathway Management: Conditional logic supporting VCS v4.4 requirements and optional CCB benefits documentation

    • Stakeholder Information: Project developer details, VVB assignments, and community consultation documentation

• Methodology Implementation: Project boundary definition, quantification approach selection, and baseline scenario establishment

    VM0033's Project Description schema contains 3,779 rows of structured data. This demonstrates how complex environmental methodologies require extensive information capture while maintaining usability for stakeholder workflows.

    Why This Foundation Approach Works: Establishing clear project context before calculations helps stakeholders understand what they're implementing and why. The foundation information also provides the context that calculation engines need to process parameters correctly.

    Calculations and Parameter Engine

    The Calculations section implements VM0033's computational requirements through structured parameter management and automated calculation workflows. This architecture handles the recursive calculation dependencies identified during Part II analysis.

    Calculation Engine Components:

    Monitoring Period Inputs: Time-series data collection framework with 47 structured fields handling annual data requirements across 100-year crediting periods. This component manages the temporal aspects of VM0033's monitoring requirements.

    Annual Input Parameters: Year-specific parameter tracking with 44-50 configured fields supporting VM0033's requirement for annual updates to key variables like biomass density, emission factors, and area measurements.

    Baseline Emissions Calculation: 204-field calculation engine implementing VM0033's baseline scenario quantification including soil carbon stocks, biomass calculations, and greenhouse gas emissions across all relevant carbon pools.

    Project Emissions Calculation: 196-203 field calculation framework processing project scenario emissions with restoration activity impacts, modified emission factors, and project-specific boundary conditions.

    Net ERR Calculation: 21-field validation engine that processes baseline and project calculations into final emission reduction results, including leakage accounting, uncertainty deductions, and buffer requirements.

    This calculation architecture handles VM0033's complex dependencies where final results depend on annual calculations, which depend on monitoring data, which depend on project-specific parameters established in the Project Description foundation.
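A minimal sketch of the final netting step, assuming simplified variable names and buffer handling (VM0033 defines the authoritative equations):

```typescript
// Hedged sketch of the netting performed by the Net ERR engine.
function netEmissionReductions(
  baselineEmissions: number, // t CO2e, from the baseline engine
  projectEmissions: number,  // t CO2e, from the project engine
  leakage: number,           // t CO2e, leakage deduction
  bufferPercent: number      // %, from non-permanence risk assessment
): number {
  const gross = baselineEmissions - projectEmissions - leakage;
  return gross * (1 - bufferPercent / 100); // credits after buffer withholding
}
```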

    Guardian Field Mapping Patterns

    Translating methodology parameters into Guardian field configurations requires patterns that preserve methodology integrity while generating usable interfaces. VM0033's implementation established consistent approaches for different types of methodology content.

    Standard Parameter Field Structure

    Every methodology parameter from Part II analysis translates into Guardian fields using a consistent structure that captures all necessary information for implementation and validation.

    Required Parameter Fields:

    • Description: Clear explanation of what the parameter represents and how it's used in methodology calculations

    • Unit: Measurement units (t CO2e, hectares, percentage) matching methodology specifications exactly

    • Equation: Reference to specific methodology equations where the parameter appears

• Source of data: Methodology requirements for how this parameter should be determined

• Value applied: Actual parameter values, often with stratum-specific or project-specific breakdowns

• Justification: Required explanation for parameter selection and data source choices

    For example, VM0033's BD (Biomass Density) parameter implementation:
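The sketch below reconstructs this pattern for illustration; the key, texts, and unit are examples rather than VM0033's production definition:

```typescript
// Illustrative parameter definition following the required-fields pattern.
const biomassDensityField = {
  key: "BD",
  title: "Biomass Density",
  description:
    "Biomass density used in tree and shrub carbon stock calculations",
  unit: "t d.m./ha",
  equation: "Referenced in VM0033 biomass carbon stock equations",
  sourceOfData:
    "Field measurements per AR-Tool14 or values from applicable literature",
  type: "number",
};
```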

    This pattern ensures that every parameter implementation maintains full methodology traceability while providing clear guidance for data collection and validation.

    Conditional Logic Implementation Patterns

    VM0033's multiple calculation pathways required conditional logic that shows relevant fields based on user selections while maintaining methodology coverage.

    Conditional Logic Examples from VM0033:

    Certification Type Selection:

    • Selecting "VCS v4.4" shows core VCS requirements

    • Selecting "VCS + CCB" adds community and biodiversity benefit documentation requirements

    • Each pathway maintains methodology compliance while avoiding unnecessary complexity

    Quantification Approach Selection:

    • "Direct method" shows field measurement data entry forms

    • "Indirect method" shows estimation parameter inputs

    • Each method implements VM0033's approved calculation approaches

    Soil Emission Calculation Selection:

    • CO2 approach selection determines which soil carbon stock calculation methods appear

    • CH4 and N2O approach selections control emission factor parameter visibility

    • Each combination implements VM0033's flexible calculation framework

    This conditional structure ensures users see only methodology-relevant fields based on their project characteristics, reducing complexity while ensuring requirements coverage.
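Guardian's schema editor configures these conditions through its UI, but because schemas build on JSON Schema, the underlying logic can be pictured as an if/then rule. The field keys here are hypothetical:

```typescript
// Sketch of conditional requirements keyed off certification selection.
const certificationCondition = {
  if: {
    properties: { certification_type: { const: "VCS + CCB" } },
  },
  then: {
    required: ["community_benefits", "biodiversity_benefits"],
    properties: {
      community_benefits: { type: "string" },
      biodiversity_benefits: { type: "string" },
    },
  },
};
```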

    UX Patterns

    Progressive Disclosure: Complex calculation parameters appear only after basic project information completion. This prevents overwhelming initial experiences while ensuring users understand project context before diving into technical details.

    Role-Based Interface: Different stakeholder roles see appropriate field sets:

    • Project Developers see data entry requirements with guidance

    • VVBs see verification-focused interfaces with tabs for validation & verification reports

    • Standard Registry sees approval-focused documentation with key decision points highlighted

    Contextual Help: We're working on a new feature to enable field-level methodology references, calculation explanations and source justifications in Guardian schemas.

    Validation Checks: Real-time validation feedback helps users understand data requirements and correct issues immediately rather than discovering problems during submission review.

    Next Steps

    This chapter established the architectural foundation for Guardian schema development using patterns demonstrated through VM0033's production implementation. The two-part architecture, field mapping patterns, and other techniques provide the framework for implementing granular data collection effectively.

    The next chapter applies these principles to PDD schema development, demonstrating how to implement project description requirements and calculation frameworks using the patterns and techniques established here.

    Chapter 6: Tools and Modules Integration

One of the most challenging aspects of VM0033 digitization was handling the external calculation tools that the methodology references. These aren't just simple formulas - they're complete calculation systems developed by other organizations with their own parameter requirements, validation rules, and output formats. This chapter shares our experience integrating the three tools we implemented: AR-Tool05 for fossil fuel emissions, AR-Tool14 for biomass calculations, and the AFOLU non-permanence risk tool.

    The integration challenge went beyond just implementing calculations. Each tool was designed as a standalone system, but we needed to make them work seamlessly within VM0033's calculation framework while maintaining their original logic and validation requirements. The approach we developed balances faithful implementation of tool requirements with practical usability in the Guardian platform.

    Understanding External Tool Dependencies

    When we first analyzed VM0033, we found references to numerous CDM tools and VCS modules scattered throughout the methodology. Initially, this seemed overwhelming - how could we possibly implement all these external systems? The recursive analysis from Chapter 5 helped us understand which tools were actually needed for our mangrove restoration focus.

    VM0033's Tool References: The methodology mentions over a dozen external tools, but our boundary condition analysis revealed that the Allcot ABC Mangrove project only required three:

    • AR-Tool05: For calculating fossil fuel emissions from project activities

    • AR-Tool14: For estimating carbon stocks in trees and shrubs

• AFOLU Non-Permanence Risk Tool: For assessing project risks that might reverse carbon benefits

    Why Only These Three: The Allcot project boundary decisions eliminated the need for most other tools. No fire reduction premium meant no fire-related tools. Mineral soil only meant no peat-specific calculations. Simple planting activities meant minimal fossil fuel calculations.

    Tool Integration Strategy: Rather than trying to implement complete standalone versions of each tool, we focused on integrating the specific calculation procedures that VM0033 actually uses from each tool.

Reference Materials: For tool integration context, see our parsed VM0033 methodology and Python extraction tool in our Artifacts Collection. The VM0033 test artifact contains real project data for validation (covered in Chapter 7).

    Tool vs. Methodology Calculations

    Distinguishing Tool Logic from Methodology Logic: VM0033 uses tool calculations as components within its larger framework. For example, AR-Tool14 calculates biomass for a single tree or plot, but VM0033 scales this across multiple strata and time periods. Understanding this distinction helped us design integration that preserves tool accuracy while meeting methodology requirements.

    Data Flow Management: Each tool expects inputs in specific formats and produces outputs that need to be transformed for use in VM0033 calculations. We had to map data flows carefully to ensure information passes correctly between tool calculations and methodology calculations.

    AR-Tool05: Fossil Fuel Emission Calculations

    AR-Tool05 handles emissions from fossil fuel use in project activities. Even though the Allcot project excludes fossil fuel emissions (mangrove planting doesn't require heavy machinery), we implemented this tool because it's commonly needed in other restoration projects.

    Tool Purpose: AR-Tool05 provides standardized approaches for calculating CO₂ emissions from equipment, vehicles, and energy use during project implementation. This includes direct fuel combustion and indirect emissions from electricity consumption.

    Integration Challenge: AR-Tool05 is designed as a comprehensive energy accounting system, but VM0033 only needs specific emissions calculations. We had to extract the relevant calculation procedures while maintaining the tool's validation logic.

    Key Calculation Components We Implemented:

    Direct Combustion Emissions: Calculate CO₂ from fuel burned in vehicles and equipment using fuel consumption data and standard emission factors.

    Equipment-Specific Calculations: Different equipment types (boats, trucks, generators) have different fuel consumption patterns and emission factors that the tool accounts for systematically.

    Activity-Based Scaling: The tool calculates emissions per activity (hours of operation, distance traveled, area covered) which VM0033 then scales across project implementation schedules.
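A hedged sketch of the direct-combustion pattern (fuel consumed × emission factor); the structure follows the tool's approach, but the parameters are placeholders rather than AR-Tool05's actual tables:

```typescript
// Fuel combustion emissions: activity data × fuel use rate × emission factor.
interface EquipmentUse {
  hoursOfOperation: number;            // activity data
  fuelUseLitresPerHour: number;        // consumption rate for this equipment
  emissionFactorKgCo2PerLitre: number; // fuel-specific emission factor
}

function fossilFuelEmissionsTCo2(uses: EquipmentUse[]): number {
  const kgCo2 = uses.reduce(
    (sum, u) =>
      sum +
      u.hoursOfOperation * u.fuelUseLitresPerHour * u.emissionFactorKgCo2PerLitre,
    0
  );
  return kgCo2 / 1000; // kg CO2 -> t CO2
}
```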

    AR-Tool05 Implementation Approach

    Simplified Parameter Collection: Instead of implementing AR-Tool05's complete equipment catalog, we focused on the equipment types commonly used in mangrove restoration: boats for site access, small equipment for planting, and vehicles for transportation.

    Validation Logic: AR-Tool05 includes validation rules for fuel consumption rates and emission factors. We preserved this validation because it catches data entry errors that could significantly affect results.

    Output Integration: AR-Tool05 produces total CO₂ emissions that get added to VM0033's project emission calculations. The integration required unit conversions and time period alignment with VM0033's annual calculation cycles.

    AR-Tool14: Biomass and Carbon Stock Calculations

    AR-Tool14 is central to mangrove restoration because it provides the standardized allometric equations for calculating carbon storage in trees and shrubs. This tool became one of our most important integrations because it directly affects the project's carbon benefit calculations.

    Tool Purpose: AR-Tool14 contains allometric equations that estimate biomass from tree measurements (diameter, height, species). These equations were developed from extensive field research and provide standardized approaches for different forest types and species groups.

    Why This Tool Matters: Without AR-Tool14, every project would need to develop its own biomass equations, which is expensive and time-consuming. The tool provides scientifically validated equations that are accepted by carbon standards worldwide.

    VM0033 Integration Points: VM0033 uses AR-Tool14 calculations in several places:

    • Baseline biomass estimation for existing vegetation

    • Project biomass growth projections over time

    • Above-ground and below-ground biomass calculations

    • Dead wood and litter biomass when included

    AR-Tool14 Implementation Details

    Species-Specific Equations: AR-Tool14 includes different allometric equations for different species groups. For mangrove restoration, we implemented equations specific to tropical wetland species that match the restoration targets in the Allcot project.

    Multi-Component Calculations: The tool calculates separate estimates for above-ground biomass, below-ground biomass, dead wood, and litter. VM0033 uses these component estimates in different parts of its calculation framework.

    Growth Projection Logic: AR-Tool14 provides approaches for projecting biomass growth over time using diameter increment data. This became critical for VM0033's long-term carbon benefit projections.

    Parameter Requirements We Mapped:

    • Tree diameter at breast height (DBH) measurements

    • Tree height measurements for species without height-specific equations

    • Species identification or species group classification

    • Site condition factors (soil type, climate region, management intensity)
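To illustrate the allometric pattern, the sketch below uses the generic power-function form; the coefficients and root-to-shoot ratio are placeholders, since AR-Tool14 supplies species- and condition-specific values:

```typescript
// Generic allometric biomass-to-carbon sketch (placeholder coefficients).
function treeCarbonTCo2e(
  dbhCm: number,      // diameter at breast height
  a: number,          // species-specific allometric coefficient
  b: number,          // species-specific allometric exponent
  rootToShoot: number // below-ground/above-ground biomass ratio
): number {
  const agbKg = a * Math.pow(dbhCm, b);    // above-ground biomass, kg d.m.
  const bgbKg = agbKg * rootToShoot;       // below-ground biomass, kg d.m.
  const carbonKg = (agbKg + bgbKg) * 0.47; // IPCC default carbon fraction
  return (carbonKg * 44) / 12 / 1000;      // C -> CO2e, kg -> tonnes
}
```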

    Handling AR-Tool14 Complexity

    Equation Selection Logic: AR-Tool14 contains dozens of allometric equations for different species and conditions. We implemented selection logic that chooses appropriate equations based on user-provided species and site information.

    Unit Management: The tool uses various units for different equations (DBH in cm, height in m, biomass in kg or tons). Our implementation handles unit conversions automatically to prevent errors.

    Validation and Error Handling: AR-Tool14 includes validation rules for measurement ranges and species applicability. We preserved these validations because they prevent calculation errors from invalid input data.

AFOLU Non-Permanence Risk Assessment

The AFOLU (Agriculture, Forestry, and Other Land Use) non-permanence risk tool assesses the likelihood that carbon benefits might be reversed due to various risk factors. This tool was essential for VM0033 because it determines buffer pool contributions that affect final credit calculations.

Tool Purpose: AFOLU evaluates project risks across multiple categories (natural disasters, management failures, political instability, economic factors) and calculates a risk score that determines what percentage of credits must be held in buffer pools.

Why Risk Assessment Matters: Carbon projects can lose stored carbon through storms, fires, disease, or management changes. The AFOLU tool provides standardized risk assessment that ensures projects contribute appropriately to insurance buffer pools.

Integration with VM0033: VM0033 uses AFOLU risk scores to calculate buffer pool contributions that reduce the net credits a project can claim. Higher risk scores mean higher buffer contributions and fewer credits available for sale.
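A simplified sketch of that deduction, assuming a placeholder mapping from risk score to buffer percentage (the AFOLU tool defines the real mapping and scoring rules):

```typescript
// Buffer pool withholding based on a non-permanence risk score.
function creditsAfterBuffer(grossVcus: number, riskScore: number): number {
  const bufferPercent = riskScore; // placeholder: tool maps score -> buffer %
  const withheld = grossVcus * (bufferPercent / 100); // sent to buffer pool
  return grossVcus - withheld; // credits available for issuance
}
```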

AFOLU Implementation Approach

Risk Category Assessment: AFOLU evaluates risks across multiple standardized categories. For mangrove restoration, the most relevant categories include:

    • Natural disturbance risks (storms, sea level rise, disease)

    • Management and financial risks (funding stability, technical capacity)

    • Market and political risks (land tenure, regulatory changes)

Scoring Integration: AFOLU produces risk scores that feed into VM0033's buffer pool calculations. We implemented the scoring logic while simplifying the user interface to focus on risks most relevant to mangrove restoration.

    Project-Specific Customization: The tool allows project-specific risk assessments based on local conditions. Our implementation guides users through risk evaluation while maintaining consistency with AFLOU's standardized approaches.

    Creating Unified Integration Framework

    Rather than implementing three separate tools, we designed a unified integration framework that manages data flows between tools and VM0033 calculations while maintaining each tool's specific requirements.

    Shared Parameter Management: Many parameters are used by multiple tools. For example, tree species information affects both AR-Tool14 biomass calculations and AFLOU risk assessments. Our framework ensures parameter consistency across tool integrations.

    Calculation Sequencing: Some tool calculations depend on outputs from other tools. Our framework manages calculation sequences to ensure data is available when needed while handling dependencies gracefully.

    Validation Coordination: Each tool has its own validation requirements, but some validations overlap or conflict. We designed validation logic that satisfies all tool requirements while providing clear feedback to users about any issues.
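One way to picture the sequencing logic is a dependency-aware runner over a shared parameter store. This is a sketch of the idea under our own naming, not Guardian's implementation:

```typescript
// Run tool steps once their declared inputs exist in the parameter store.
interface ToolStep {
  name: string;
  inputs: string[];                       // parameter keys this step reads
  run(params: Map<string, number>): void; // writes its outputs into params
}

function runInOrder(steps: ToolStep[], params: Map<string, number>): void {
  const pending = [...steps];
  while (pending.length > 0) {
    const idx = pending.findIndex((s) => s.inputs.every((k) => params.has(k)));
    if (idx === -1) throw new Error("Unresolvable dependency between tool steps");
    const [ready] = pending.splice(idx, 1);
    ready.run(params); // executes and makes its outputs available
  }
}
```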

    Framework Benefits

    Consistent User Experience: Users interact with a single interface that handles all tool integrations rather than switching between different tool interfaces.

    Data Quality Assurance: The unified framework ensures data consistency across all tool calculations and catches errors that might arise from parameter mismatches between tools.

    Maintenance Efficiency: Updates to tool calculations or requirements can be managed in one place rather than updating multiple separate integrations.

    Practical Integration Lessons

    Start with Core Functionality: Our initial approach tried to implement complete tool functionality, which was overwhelming. It worked much better to start with the specific functions VM0033 actually uses and expand from there.

    Preserve Tool Validation: Each tool's validation logic exists for good reasons - usually to prevent calculation errors or inappropriate application. Preserving this validation prevented problems during implementation and ongoing use.

    Plan for Tool Updates: CDM tools and VCS modules get updated periodically. We designed our integration to accommodate updates without requiring complete reimplementation.

    Test with Known Results: Each tool typically includes example calculations or test cases. We used these to validate our integration implementation before connecting it to VM0033 calculations.

    Document Integration Decisions: When tools provide multiple calculation options, we documented which options we implemented and why. This helped with maintenance and troubleshooting later.

    Integration Testing and Validation

    Tool-Level Testing: We first tested each tool integration separately using the tool's own test cases and examples to ensure calculation accuracy.

    VM0033 Integration Testing: After individual tool testing, we tested the complete integration using VM0033 calculation examples to ensure data flows correctly through the full calculation chain.

    Cross-Tool Consistency: We tested scenarios where multiple tools use the same input parameters to ensure consistent results and catch parameter handling errors.

    Edge Case Testing: Each tool handles edge cases (unusual measurements, boundary conditions) differently. We tested these scenarios to ensure graceful handling across the integrated system.
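A sketch of a known-result check, reusing the fossilFuelEmissionsTCo2 sketch from the AR-Tool05 section above; the expected value is hand-computed for these placeholder inputs, not a published tool example:

```typescript
// 10 h × 5 L/h × 2.68 kg CO2/L = 134 kg = 0.134 t CO2
import assert from "node:assert";

const result = fossilFuelEmissionsTCo2([
  { hoursOfOperation: 10, fuelUseLitresPerHour: 5, emissionFactorKgCo2PerLitre: 2.68 },
]);
assert(Math.abs(result - 0.134) < 1e-9, "AR-Tool05 integration drifted from known result");
```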

    From Tool Integration to Test Artifacts

    The tool integration work creates the foundation for comprehensive test artifact development in Chapter 7. Understanding how tools connect to VM0033 calculations enables creating test scenarios that validate not just methodology calculations, but also the integration points where tools provide inputs to methodology calculations.

    Test Coverage Requirements: Tool integrations add complexity that must be covered in test artifacts. Tests need to validate tool calculations individually and integration points where tools connect to methodology calculations.

    Parameter Coverage: Tools introduce additional parameters that must be included in test scenarios. The parameter mapping work from tool integration directly informs test artifact parameter requirements.

    Validation Testing: Tool validation logic must be tested to ensure it properly prevents calculation errors without blocking valid parameter combinations.


    Tool Integration Summary and Next Steps

    Integration Framework Complete: You now understand the approach we used to integrate external calculation tools into VM0033 digitization.

Key Integration Outcomes:

• Unified integration framework for consistent data management

    Preparation for Chapter 7: The tool integration work provides essential components for test artifact development. The parameter requirements, calculation procedures, and validation logic from tool integration become key elements in comprehensive test scenarios.

    Real-World Application: While we focused on three specific tools for the Allcot mangrove project, the integration approach applies to any external calculation tools referenced by environmental methodologies. The unified framework approach scales to handle additional tools as project requirements expand.

    Implementation Reality: Tool integration took significant time during VM0033 digitization, but it provides reusable calculation capabilities that can be applied to other projects using the same tools.

    Part III: Schema Design and Development

    Practical schema development using Excel-first approach and Guardian's schema management features

    Part III transforms your methodology analysis from Part II into working Guardian schemas through hands-on, step-by-step implementation. Using VM0033 as a concrete example, this section teaches practical schema development from architectural foundations through testing and validation.

    The five chapters follow a logical progression: Guardian schema basics → PDD schema development → monitoring schema development → advanced schema management techniques → practical testing checklist.

    Schema Development Approach

    Part III focuses on practical schema development using proven patterns from VM0033 implementation. Rather than theoretical concepts, each chapter provides step-by-step instructions for creating working schemas that capture methodology requirements accurately.

    Development Sequence:

    1. Schema Architecture Foundations (Chapter 8): Guardian schema system basics and field mapping principles

    2. PDD Schema Development (Chapter 9): Approach to building comprehensive PDD schemas step-by-step

    3. Monitoring Schema Development (Chapter 10): Time-series monitoring schemas with temporal data management

4. Advanced Schema Techniques (Chapter 11): API schema management, field properties, Required types, and UUIDs

5. Schema Testing Checklist (Chapter 12): Practical validation steps using Guardian's testing features

    This hands-on approach ensures you can build production-ready schemas while understanding Guardian's schema management capabilities.

    Chapter Progression and Learning Objectives

Chapter 8: Schema Architecture and Foundations

Focus: Guardian schema system fundamentals and the two-part architecture pattern used in VM0033.

    What You'll Learn: Guardian's JSON Schema integration, Verifiable Credentials structure, and the proven two-part architecture (Project Description + Calculations) that handles methodology complexity. You'll understand how to map methodology parameters to Guardian field types.

    Practical Skills: Field type selection, parameter mapping, and architectural patterns that simplify complex methodologies into manageable schema structures.

Chapter 9: Project Design Document (PDD) Schema Development

Focus: Step-by-step Excel-first approach to building comprehensive PDD schemas.

    What You'll Learn: Complete PDD schema development process from Excel template through Guardian import. Includes conditional logic implementation, sub-schema creation, and essential field key management for calculation code readability.

    Practical Skills: Excel schema template usage, Guardian field configuration, conditional visibility logic, and proper field key naming for maintainable calculation code.

Chapter 10: Monitoring Report Schema Development

Focus: Time-series monitoring schemas that handle annual data collection and calculation updates.

    What You'll Learn: Monitoring schema development with temporal data structures, quality control fields, and evidence documentation. Covers field key management specific to time-series calculations and VVB verification workflows.

    Practical Skills: Annual parameter tracking, temporal data organization, monitoring-specific field key naming, and verification support structures.

Chapter 11: Advanced Schema Techniques

Focus: API schema management, standardized properties, Required field types, and UUID management.

    What You'll Learn: Schema management with API operations, the four Required field types (None/Hidden/Required/Auto Calculate), standardized property definitions from GBBC specifications, and UUID management for efficient development.

    Practical Skills: API schema updates, Auto Calculate field implementation, standardized property usage, and UUID-based schema version management.

Chapter 12: Schema Testing and Validation Checklist

Focus: Practical validation steps using Guardian's testing features before schema deployment.

    What You'll Learn: Systematic testing approach using Default Values, Suggested Values, and Test Values. Covers schema preview testing, UUID integration into policy workflows, and user experience validation.

    Practical Skills: Guardian schema testing tools usage, validation rule configuration, logical field organization, and pre-deployment checklist completion.

    Building on Part II Foundation

    Part III directly implements the analysis work from Part II. Your methodology decomposition, parameter identification, and test artifacts become the inputs for schema development.

    Implementation Translation: The parameter lists, dependency trees, and calculation frameworks from Part II translate directly into Guardian schema configurations through the techniques taught in Part III.

    Test Integration: Test artifacts from Chapter 7 integrate with schema testing in Chapter 12, ensuring implementations maintain accuracy while providing good user experience.

    Part III Completion

    Completing Part III provides you with:

    • Production-ready PDD and monitoring schemas for your methodology

    • Guardian schema development skills transferable to other methodologies

    • Understanding of schema testing and validation best practices

    • Schema management techniques for efficient development and maintenance

    Preparation for Part IV: The schemas created in Part III integrate directly with Guardian policy workflow blocks. Your data structures and validation rules become the foundation for complete methodology automation.

    Time Investment

    Each chapter requires approximately 15-25 minutes reading plus 30-60 minutes hands-on practice:

    • Chapter 8: 20 min reading + 30 min practice (architectural understanding)

    • Chapter 9: 25 min reading + 60 min practice (comprehensive PDD schema development)

    • Chapter 10: 20 min reading + 45 min practice (monitoring schema development)

• Chapter 11: 25 min reading + 45 min practice (advanced techniques)

• Chapter 12: 15 min reading + 30 min practice (testing checklist)

    Total Investment: ~3-4 hours for complete schema development capabilities


    Chapter Navigation

| Chapter | Title | Focus | Reading Time | Practice Time |
| --- | --- | --- | --- | --- |
| 8 | Schema Architecture and Foundations | Guardian schema basics and field mapping | ~20 min | ~30 min |
| 9 | PDD Schema Development | PDD schema step-by-step | ~25 min | ~60 min |
| 10 | Monitoring Schema Development | Time-series monitoring and field management | ~20 min | ~45 min |
| 11 | Advanced Schema Techniques | API management, Required types, UUIDs | ~25 min | ~45 min |
| 12 | Schema Testing Checklist | Practical validation and testing steps | ~15 min | ~30 min |

    Ready to Begin: With Part II analysis complete, you're prepared for hands-on schema development. Start with Chapter 8 for Guardian schema system foundations.

    Part IV: Policy Workflow Design and Implementation

    Building complete Guardian policies using your schemas from Part III

    Part IV transforms your schemas from Part III into working Guardian policies that automate complete certification workflows. You'll learn Guardian's Policy Workflow Engine by building on VM0033's production policy, creating stakeholder workflows, and implementing token minting based on verified emission reductions/removals.

    The five chapters progress logically: policy architecture understanding → workflow block configuration → VM0033 implementation deep dive → advanced patterns → testing and deployment.

    Policy Development Approach

    Part IV uses VM0033's complete policy implementation as your guide. You'll see how real production policies handle Project Developer submissions, VVB verification, and Standard Registry oversight through Guardian's workflow blocks.

    Development Sequence:

    1. Policy Architecture and Design Principles (Chapter 13): Guardian PWE fundamentals and integration with Part III schemas

    2. Guardian Workflow Blocks and Configuration (Chapter 14): Step-by-step configuration of Guardian's 25+ workflow blocks

    3. VM0033 Policy Implementation Deep Dive (Chapter 15): Complete analysis of VM0033's production policy patterns

4. Advanced Policy Patterns and Testing (Chapter 16): Multi-methodology support, testing strategies, and security patterns

5. Policy Deployment and Production Management (Chapter 17): Production deployment, monitoring, and operational excellence

    This hands-on approach ensures you can build production-ready policies that handle real-world methodology requirements.

    Chapter Progression and Learning Objectives

Chapter 13: Policy Workflow Architecture and Design Principles

Focus: Guardian Policy Workflow Engine basics and integration with Part III schemas.

    What You'll Learn: Guardian's workflow block system, event-driven architecture, and how to connect your schemas to policy automation. You'll understand stakeholder roles, permissions, and document flow patterns using VM0033's implementation.

    Practical Skills: Policy architecture design, schema UUID integration, role-based access control, and workflow planning for methodology certification processes.

Chapter 14: Guardian Workflow Blocks and Configuration

Focus: Step-by-step configuration of Guardian's workflow blocks for data collection, calculations, and token management.

    What You'll Learn: Complete guide to Guardian's 25+ workflow blocks including data input blocks (requestVcDocumentBlock), calculation blocks (customLogicBlock), and token blocks (mintDocumentBlock). Each block is explained with VM0033 configuration examples.

    Practical Skills: Workflow block configuration, form generation from schemas, calculation logic implementation, and token minting rule setup.

Chapter 15: VM0033 Policy Implementation Deep Dive

Focus: Complete analysis of VM0033's production policy with 37 schemas and 2 AR Tools.

    What You'll Learn: How VM0033 implements Project Developer submission workflows, VVB verification processes, and Standard Registry oversight. You'll trace the complete flow from PDD submission to VCU token issuance using real policy configurations.

    Practical Skills: Multi-stakeholder workflow design, document state management, verification workflows, and production policy patterns.

Chapter 16: Advanced Policy Patterns and Testing

Focus: Multi-methodology support, comprehensive testing strategies, and production-grade security patterns.

    What You'll Learn: Advanced policy architecture including multi-methodology integration, external data sources, comprehensive testing frameworks, and security implementations. You'll see how to optimize policies for performance and handle complex methodology requirements.

    Practical Skills: Multi-methodology pattern design, policy testing automation, performance optimization, external API integration, and security implementation.

Chapter 17: Policy Deployment and Production Management

Focus: Production deployment strategies, monitoring, and operational excellence for Guardian policies.

    What You'll Learn: Production deployment architecture, monitoring and alerting systems, incident response procedures, cost optimization, and stakeholder management for live policy operations.

    Practical Skills: Production deployment configuration, monitoring setup, incident response planning, cost management, and policy lifecycle management.

    Building on Part III Foundation

    Part IV directly implements your schemas from Part III. Your schema UUIDs become references in policy workflow blocks, your field keys become calculation variables, and your validation rules become workflow automation.

    Implementation Translation:

    • Part III PDD schema → requestVcDocumentBlock for project submission

    • Part III monitoring schema → requestVcDocumentBlock for monitoring reports

    • Schema field keys → customLogicBlock calculation variables

    • Schema validation rules → documentValidatorBlock configurations

    Direct Integration: VM0033 shows exactly how schemas integrate with policy workflows, providing concrete examples for your methodology implementation.
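As a rough sketch of how field keys surface in calculation code, the fragment below follows the customLogicBlock style of operating on incoming documents. The exact runtime contract and the field keys shown are assumptions to verify against the customLogicBlock documentation:

```typescript
// Hypothetical calculation over a submitted document's field keys.
function calc(documents: any[], done: (result: any) => void): void {
  const subject = documents[0].document.credentialSubject[0];
  subject.net_err =
    subject.baseline_emissions_total -
    subject.project_emissions_total -
    subject.leakage_emissions_total; // written back to a schema field key
  done(documents);
}
```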

    Practical Implementation Focus

    Part IV emphasizes real-world policy development:

    • VM0033 Production Policy: Complete policy with 37 schemas extracted and analyzed

    • Stakeholder Workflows: Project_Proponent, VVB, and OWNER role implementations

    • Event-Driven Architecture: Real triggers, state changes, and workflow coordination

• Token Minting Integration: From emission reduction calculations to VCU issuance

• Production Deployment: Actual configuration and maintenance procedures

    Part IV Completion

    Completing Part IV provides you with:

    • Complete Guardian policy implementing your methodology

    • Multi-stakeholder workflows with proper access control

    • Token minting based on verified emission reductions

• Production deployment and maintenance capabilities

• Policy development skills transferable to other methodologies

    Ready for Production: Your methodology will be fully automated on Guardian with proper stakeholder workflows, audit trails, and token management.

    Time Investment

    Each chapter requires approximately 20-30 minutes reading plus 45-90 minutes hands-on practice:

    • Chapter 13: 25 min reading + 60 min practice (policy architecture and planning)

    • Chapter 14: 30 min reading + 90 min practice (workflow block configuration)

    • Chapter 15: 25 min reading + 75 min practice (VM0033 implementation analysis)

• Chapter 16: 30 min reading + 60 min practice (advanced patterns and integration)

• Chapter 17: 20 min reading + 45 min practice (testing and deployment)

    Total Investment: ~5-6 hours for complete policy development capabilities


    Chapter Navigation

| Chapter | Title | Focus | Reading Time | Practice Time |
| --- | --- | --- | --- | --- |
| 13 | Policy Workflow Architecture | Guardian PWE basics and schema integration | ~25 min | ~60 min |
| 14 | Workflow Blocks and Configuration | Step-by-step block configuration guide | ~30 min | ~90 min |
| 15 | VM0033 Implementation Deep Dive | Production policy analysis and patterns | ~25 min | ~75 min |
| 16 | Advanced Policy Patterns and Testing | Multi-methodology support and testing | ~30 min | ~60 min |
| 17 | Policy Deployment and Production | Production deployment and management | ~20 min | ~45 min |

    Policy Development Path: Follow chapters sequentially to build from basic policy understanding to complete production deployment.

    Ready to Begin: With Part III schemas complete, you're prepared for policy workflow development. Start with Chapter 13 for Guardian Policy Workflow Engine foundations.

    Guardian Integration

    Integration system for linking handbook content with existing Guardian documentation

    Overview

    This system ensures that handbook content properly references existing Guardian documentation from docs/SUMMARY.md rather than duplicating information, while maintaining focus on methodology digitization context.

    VM0033 Integration

    System for leveraging existing VM0033 documentation and requesting only Guardian-specific implementation details

    Overview

    This system ensures accurate VM0033 references by:

1. Using existing parsed documentation in docs/VM0033-methodology-pdf-parsed/ for basic methodology questions

2. Requesting user input only for Guardian-specific implementation details, screenshots, and current system status

    Chapter 28: Troubleshooting and Common Issues

    Practical tips and solutions for common problems encountered during methodology digitization

    This chapter provides informal, practical guidance for resolving common issues during Guardian methodology development. These tips come from real-world experience and can save significant development time.

    Schema Building Best Practices

Configure Default, Suggested, and Test Values for key fields so schemas can be exercised quickly during review and testing. For example:

Field: "Project Area (hectares)"
Default Value: 100
Suggested Value: 500
Test Value: 250

Organize fields in a logical progression that mirrors how project developers actually work:

1. Basic Project Info (title, developer, dates)
2. Certification Path Selection (VCS/CCB)
3. Methodology Selection (calculation methods)
4. Method-Specific Parameters (conditional)
5. Evidence Documentation
6. Validation and Review

    Guardian Documentation Structure Analysis

    Based on docs/SUMMARY.md, the following Guardian documentation sections are relevant for methodology digitization:

Core Architecture References

- [Guardian Architecture](../../../guardian/architecture/README.md)
- [Deep Dive Architecture](../../../guardian/architecture/reference-architecture.md)
- [High Level Architecture](../../../guardian/architecture/architecture-2.md)
- [Policies, Projects and Topics Mapping](../../../guardian/architecture/schema-architecture.md)

Policy Workflow Engine References

- [Available Policy Workflow Blocks](../../../guardian/standard-registry/policies/policy-creation/introduction/README.md)
- [Policy Creation using UI](../../../guardian/standard-registry/policies/policy-creation/policy-demo.md)
- [Policy Workflow Creation Guide](../../../guardian/standard-registry/policies/policy-creation/creating-a-policy-through-policy-configurator/README.md)

Schema System References

- [Available Schema Types](../../../guardian/standard-registry/schemas/available-schema-types.md)
- [Schema Creation using UI](../../../guardian/standard-registry/schemas/creating-system-schema-using-ui.md)
- [Schema APIs](../../../guardian/standard-registry/schemas/schema-creation-using-apis/README.md)
- [Schema Versioning & Deprecation](../../../guardian/standard-registry/schemas/schema-versioning-and-deprecation-policy.md)

User Management References

- [Multi-User Roles](../../../guardian/readme/environments/multi-session-consistency-according-to-environment.md)
- [User Guide Glossary](../../../guardian/readme/guardian-glossary.md)

Installation and Setup References

- [Installation Guide](../../../guardian/readme/getting-started/README.md)
- [Prerequisites](../../../guardian/readme/getting-started/prerequisites.md)
- [Building from Source](../../../guardian/readme/getting-started/installation/building-from-source-and-run-using-docker/README.md)
- [Environment Parameters](../../../guardian/readme/getting-started/installation/setting-up-environment-parameters.md)

    Integration Patterns

Environment Setup Integration Pattern

Instead of rewriting setup instructions:

{% hint style="info" %}
**Guardian Setup**: For complete Guardian platform setup instructions, see the [Installation Guide](../../../guardian/readme/getting-started/README.md).
{% endhint %}

**Methodology-Specific Setup Considerations**:
- [User input required: Specific setup requirements for methodology development]
- [User input required: Additional tools needed for methodology work]
- [User input required: Environment configuration for methodology testing]

**Quick Setup Validation**:
1. Follow the [Prerequisites](../../../guardian/readme/getting-started/prerequisites.md) guide
2. Complete [Building from Source](../../../guardian/readme/getting-started/installation/building-from-source-and-run-using-docker/README.md)
3. Verify methodology development capabilities: [User input required]

Methodology Understanding Integration Pattern

This content focuses on methodology understanding. For Guardian platform details, see:

- [Guardian Architecture](../../../guardian/architecture/README.md) - How Guardian supports methodology implementation
- [Policy Workflow Blocks](../../../guardian/standard-registry/policies/policy-creation/introduction/README.md) - Available blocks for methodology workflow
- [Schema Types](../../../guardian/standard-registry/schemas/available-schema-types.md) - Data structures for methodology requirements

**Methodology-Specific Context**: [User input required for methodology-specific domain knowledge]

Platform Overview Integration Pattern

{% hint style="info" %}
**Detailed Architecture**: For comprehensive Guardian architecture documentation, see [Guardian Architecture](../../../guardian/architecture/README.md).
{% endhint %}

**Methodology Developer Focus**:
This section highlights Guardian architecture aspects most relevant to methodology digitization:

1. **Service Architecture for Methodologies**
   - [Link to detailed architecture docs](../../../guardian/architecture/reference-architecture.md)
   - [User input required: How methodologies use Guardian services]

2. **Data Flow for Methodology Workflows**
   - [Link to data flow documentation](../../../guardian/architecture/schema-architecture.md)
   - [User input required: Methodology data flow examples]

    Reference Integration Templates

Documentation Link Template

```markdown
## [Guardian Feature] for Methodology Development

{% hint style="info" %}
**Complete Documentation**: For full details on [Guardian Feature], see [Link to Guardian Docs](../../../guardian/path/to/docs.md).
{% endhint %}

**Methodology Context**: [How this feature applies to methodology digitization]

**VM0033 Example**: [User input required: Specific VM0033 application]

**Key Points for Methodology Developers**:
- [Methodology-specific consideration 1]
- [Methodology-specific consideration 2]
- [Methodology-specific consideration 3]

**Next Steps**: [How this prepares for methodology implementation]
```

Cross-Reference Template

```markdown
## Related Guardian Documentation

For deeper understanding of concepts covered in this section:

### Core Documentation
- **[Feature Name]**: [Link](../../../guardian/path/to/docs.md) - [Brief description of relevance]
- **[Feature Name]**: [Link](../../../guardian/path/to/docs.md) - [Brief description of relevance]

### API References
- **[API Category]**: [Link](../../../guardian/path/to/api-docs.md) - [Relevance to methodology work]

### Advanced Topics
- **[Advanced Feature]**: [Link](../../../guardian/path/to/advanced-docs.md) - [When this becomes relevant]
```

    Content Integration Guidelines

    What to Link vs. What to Explain

    Always Link (Don't Duplicate)

    • Guardian installation procedures

    • Complete API documentation

    • Comprehensive feature explanations

    • Technical architecture details

    • User interface guides

    Provide Methodology Context For

    • How Guardian features apply to methodology digitization

    • VM0033-specific implementation examples

    • Methodology developer workflow considerations

    • Integration points between Guardian and methodology requirements

Integration Quality Checklist

For each Guardian reference:

- [ ] Links to existing documentation rather than duplicating
- [ ] Provides methodology-specific context
- [ ] Explains relevance to VM0033 implementation
- [ ] Maintains focus on methodology digitization
- [ ] Includes user input requirements for examples
- [ ] Validates links are current and functional

    Maintenance Procedures

Link Validation

```bash
#!/bin/bash
# Validate Guardian documentation links in Part I

echo "Validating Guardian documentation links..."

# Check all Guardian documentation references
find docs/methodology-digitization-handbook/part-1 -name "*.md" -exec grep -l "\.\.\/\.\.\/\.\.\/guardian\/" {} \; | while read file; do
    echo "Checking Guardian links in $file"
    # Extract and validate Guardian documentation links
    grep -o "\.\.\/\.\.\/\.\.\/guardian\/[^)]*" "$file" | while read link; do
        if [ ! -f "docs/guardian/${link#../../../guardian/}" ]; then
            echo "BROKEN LINK: $link in $file"
        fi
    done
done

echo "Guardian link validation complete"
```

Documentation Sync Process

Monthly Sync

1. Review docs/SUMMARY.md for structural changes
2. Validate all Guardian documentation links
3. Update broken or moved references
4. Check for new relevant documentation

Quarterly Review

1. Assess new Guardian features for methodology relevance
2. Update integration patterns as needed
3. Review user feedback on documentation usefulness
4. Optimize cross-reference effectiveness

Annual Assessment

1. Comprehensive review of Guardian documentation integration
2. Update integration templates and patterns
3. Assess methodology developer needs evolution
4. Plan integration improvements

    User Input Integration

Guardian-Specific User Input Requirements

Environment Setup Content

- [ ] Current Guardian setup requirements for methodology development
- [ ] Specific tools and configurations needed for methodology work
- [ ] Guardian platform capabilities relevant to methodology digitization
- [ ] Screenshots of current Guardian interface

Methodology Understanding Content

- [ ] How methodologies map to Guardian documentation structure
- [ ] Specific Guardian features used in methodology implementation
- [ ] Guardian workflow patterns relevant to methodologies

Platform Overview Content

- [ ] Current Guardian architecture screenshots
- [ ] Methodology implementation details in Guardian
- [ ] Specific Guardian capabilities used for methodologies
- [ ] User interface examples from methodology implementations


    Integration Success: This system ensures handbook content leverages existing Guardian documentation effectively while maintaining focus on methodology digitization and implementation.

Available VM0033 Documentation

    Parsed VM0033 Content

    The system can access comprehensive VM0033 methodology content from:

    • docs/VM0033-methodology-pdf-parsed/VM0033-Methodology.md - Full methodology text

    • docs/VM0033-methodology-pdf-parsed/VM0033-Methodology_meta.json - Structured metadata and table of contents

    What NOT to Ask Users

    Basic methodology information available in parsed docs:

    • VM0033 definitions and terminology

    • Applicability conditions and scope

    • Baseline scenario determination procedures

    • Carbon pools and GHG sources

    • Monitoring requirements and parameters

    • Mathematical formulas and calculations

    • Tool relationships (AR-Tool02, AR-Tool03, AR-Tool14)

    • Blue carbon significance and methodology overview

    • Temporal and geographic boundaries

    • Stratification requirements

    User Input Required For

    Guardian Implementation Details

    Development Environment & Setup

    Real Implementation Examples

    Content Integration Guidelines

    Using VM0033 Parsed Documentation

For methodology content, reference the parsed documentation directly:

```markdown
<!-- Example: Referencing VM0033 definitions -->
According to VM0033 Section 3 (Definitions), a "Tidal Wetland" is defined as:
[Reference: docs/VM0033-methodology-pdf-parsed/VM0033-Methodology.md]

<!-- Example: Referencing applicability conditions -->
VM0033 applicability conditions (Section 4) specify that projects must:
[Reference: docs/VM0033-methodology-pdf-parsed/VM0033-Methodology.md]
```

    Guardian Implementation Request Template

Only use this template for Guardian-specific details:

```markdown
## Guardian Implementation Detail Needed

**Chapter**: [Chapter Number and Title]
**Section**: [Specific Section]
**Guardian Feature**: [Specific Guardian capability or implementation]

**Required Information**:
- [ ] Current implementation status in Guardian
- [ ] Screenshots of Guardian interface
- [ ] Configuration files or code examples
- [ ] API endpoints or database schema
- [ ] User workflow in Guardian system

**Context**: How this Guardian implementation supports VM0033 methodology

**Note**: Basic VM0033 methodology details will be referenced from parsed documentation
```

    Content Validation System

    VM0033 Content Integration Checklist

    For each VM0033 reference:

    Guardian Integration Checklist

    For each Guardian reference:

    Implementation Guidelines

    Content Creation Process

    1. Check Parsed Documentation: First, check if VM0033 information is available in parsed docs

    2. Reference Methodology Content: Use parsed documentation for basic methodology details

    3. Identify Guardian Gaps: Determine what Guardian-specific information is needed

    4. Request Guardian Details: Use templates to request only Guardian implementation details

    5. Integrate Content: Combine methodology references with Guardian implementation

    6. Quality Check: Ensure no methodology assumptions or hallucinations

    Content Integration Examples

Methodology Reference Pattern

```markdown
<!-- CORRECT: Using parsed documentation for methodology content -->

## VM0033 Baseline Scenarios

According to VM0033 Section 6.1 "Determination of the Most Plausible Baseline Scenario",
the methodology requires [specific requirements from parsed documentation].

{% hint style="info" %}
**Guardian Implementation**: The following shows how Guardian implements VM0033 baseline scenario determination.
{% endhint %}

[USER INPUT NEEDED: Guardian screenshots and configuration for baseline scenario implementation]

<!-- INCORRECT: Asking user for basic methodology information -->
[USER INPUT NEEDED: What are VM0033 baseline scenario requirements?]
```

```markdown
<!-- Standard pattern for referencing VM0033 content -->
**VM0033 Reference**: Section [X.X] - [Section Title]
**Source**: `docs/VM0033-methodology-pdf-parsed/VM0033-Methodology.md`
**Content**: [Direct reference to methodology requirements]

**Guardian Implementation**:
[USER INPUT NEEDED: How Guardian implements this VM0033 requirement]
```

    Quality Assurance

    Content Review Process

    1. Methodology Source Check: VM0033 content referenced from parsed documentation

    2. Guardian Input Validation: Guardian-specific details obtained from user input only

    3. Documentation Integration: Guardian references link to existing documentation

    4. Accuracy Check: No methodology assumptions or hallucinations

    5. Completeness Review: All Guardian implementation details obtained

    Error Prevention

    • Use Parsed Documentation: Always check VM0033 parsed docs before asking users

    • No Methodology Assumptions: Never assume or hallucinate VM0033 content

    • Guardian-Specific Requests: Only request Guardian implementation details from users

    • Source Attribution: Always reference specific VM0033 sections from parsed docs

    • Clear Boundaries: Distinguish between methodology content and Guardian implementation

    Common Mistakes to Avoid

    ❌ Wrong: Asking user "What does VM0033 say about blue carbon?" ✅ Right: Reference VM0033 parsed documentation for blue carbon definition

    ❌ Wrong: Asking user "What are VM0033 applicability conditions?" ✅ Right: Reference Section 4 of parsed VM0033 documentation

    ❌ Wrong: Assuming Guardian implementation details ✅ Right: Request specific Guardian screenshots and configurations from user

    Maintenance

    Ongoing Updates

    • VM0033 Changes: System for handling methodology updates

    • Guardian Updates: Process for updating Guardian references

    • User Feedback: Integration of user corrections and improvements

    • Documentation Sync: Keeping Guardian documentation references current

    Version Control

    • Content Versioning: Track changes to user-provided content

    • Reference Updates: Maintain current links to Guardian documentation

    • Accuracy Tracking: Monitor and update VM0033 references as needed


    Key Principle: Use existing VM0033 parsed documentation for methodology content. Only request Guardian-specific implementation details from users.

    Critical Requirement: Never ask users for basic VM0033 methodology information that's already available in the parsed documentation. This prevents unnecessary interruptions and ensures efficient content creation.

    ## Architecture Documentation
    - [Guardian Architecture](../../../guardian/architecture/README.md)
    - [Deep Dive Architecture](../../../guardian/architecture/reference-architecture.md)
    - [High Level Architecture](../../../guardian/architecture/architecture-2.md)
    - [Policies, Projects and Topics Mapping](../../../guardian/architecture/schema-architecture.md)
    ## Policy Workflow Documentation
    - [Available Policy Workflow Blocks](../../../guardian/standard-registry/policies/policy-creation/introduction/README.md)
    - [Policy Creation using UI](../../../guardian/standard-registry/policies/policy-creation/policy-demo.md)
    - [Policy Workflow Creation Guide](../../../guardian/standard-registry/policies/policy-creation/creating-a-policy-through-policy-configurator/README.md)
    ## Schema Documentation
    - [Available Schema Types](../../../guardian/standard-registry/schemas/available-schema-types.md)
    - [Schema Creation using UI](../../../guardian/standard-registry/schemas/creating-system-schema-using-ui.md)
    - [Schema APIs](../../../guardian/standard-registry/schemas/schema-creation-using-apis/README.md)
    - [Schema Versioning & Deprecation](../../../guardian/standard-registry/schemas/schema-versioning-and-deprecation-policy.md)
    ## User Management Documentation
    - [Multi-User Roles](../../../guardian/readme/environments/multi-session-consistency-according-to-environment.md)
    - [User Guide Glossary](../../../guardian/readme/guardian-glossary.md)
    ## Setup Documentation
    - [Installation Guide](../../../guardian/readme/getting-started/README.md)
    - [Prerequisites](../../../guardian/readme/getting-started/prerequisites.md)
    - [Building from Source](../../../guardian/readme/getting-started/installation/building-from-source-and-run-using-docker/README.md)
    - [Environment Parameters](../../../guardian/readme/getting-started/installation/setting-up-environment-parameters.md)
    ## Guardian Documentation Integration for Setup
    
    ### Development Environment Setup Section
    Instead of rewriting setup instructions:
    
    {% hint style="info" %}
    **Guardian Setup**: For complete Guardian platform setup instructions, see the [Installation Guide](../../../guardian/readme/getting-started/README.md).
    {% endhint %}
    
    **Methodology-Specific Setup Considerations**:
    - [User input required: Specific setup requirements for methodology development]
    - [User input required: Additional tools needed for methodology work]
    - [User input required: Environment configuration for methodology testing]
    
    **Quick Setup Validation**:
    1. Follow the [Prerequisites](../../../guardian/readme/getting-started/prerequisites.md) guide
    2. Complete [Building from Source](../../../guardian/readme/getting-started/installation/building-from-source-and-run-using-docker/README.md)
    3. Verify methodology development capabilities: [User input required]
    ## Guardian Documentation Integration for Methodology Context
    
    ### Methodology Domain Knowledge Context
    This content focuses on methodology understanding. For Guardian platform details, see:
    
    - [Guardian Architecture](../../../guardian/architecture/README.md) - How Guardian supports methodology implementation
    - [Policy Workflow Blocks](../../../guardian/standard-registry/policies/policy-creation/introduction/README.md) - Available blocks for methodology workflow
    - [Schema Types](../../../guardian/standard-registry/schemas/available-schema-types.md) - Data structures for methodology requirements
    
    **Methodology-Specific Context**: [User input required for methodology-specific domain knowledge]
    ## Guardian Documentation Integration for Platform Overview
    
    ### Architecture Overview Section
    {% hint style="info" %}
    **Detailed Architecture**: For comprehensive Guardian architecture documentation, see [Guardian Architecture](../../../guardian/architecture/README.md).
    {% endhint %}
    
    **Methodology Developer Focus**:
    This section highlights Guardian architecture aspects most relevant to methodology digitization:
    
    1. **Service Architecture for Methodologies**
       - [Link to detailed architecture docs](../../../guardian/architecture/reference-architecture.md)
       - [User input required: How methodologies use Guardian services]
    
    2. **Data Flow for Methodology Workflows**
       - [Link to data flow documentation](../../../guardian/architecture/schema-architecture.md)
       - [User input required: Methodology data flow examples]
    ## [Guardian Feature] for Methodology Development
    
    {% hint style="info" %}
    **Complete Documentation**: For full details on [Guardian Feature], see [Link to Guardian Docs](../../../guardian/path/to/docs.md).
    {% endhint %}
    
    **Methodology Context**: [How this feature applies to methodology digitization]
    
    **VM0033 Example**: [User input required: Specific VM0033 application]
    
    **Key Points for Methodology Developers**:
    - [Methodology-specific consideration 1]
    - [Methodology-specific consideration 2]
    - [Methodology-specific consideration 3]
    
    **Next Steps**: [How this prepares for methodology implementation]
    ## Related Guardian Documentation
    
    For deeper understanding of concepts covered in this section:
    
    ### Core Documentation
    - **[Feature Name]**: [Link](../../../guardian/path/to/docs.md) - [Brief description of relevance]
    - **[Feature Name]**: [Link](../../../guardian/path/to/docs.md) - [Brief description of relevance]
    
    ### API References
    - **[API Category]**: [Link](../../../guardian/path/to/api-docs.md) - [Relevance to methodology work]
    
    ### Advanced Topics
    - **[Advanced Feature]**: [Link](../../../guardian/path/to/advanced-docs.md) - [When this becomes relevant]
    ## Guardian Integration Quality Checklist
    
    For each Guardian reference:
    - [ ] Links to existing documentation rather than duplicating
    - [ ] Provides methodology-specific context
    - [ ] Explains relevance to VM0033 implementation
    - [ ] Maintains focus on methodology digitization
    - [ ] Includes user input requirements for examples
    - [ ] Validates links are current and functional
    #!/bin/bash
    # Validate Guardian documentation links in Part I
    
    echo "Validating Guardian documentation links..."
    
    # Check all Guardian documentation references
    find docs/methodology-digitization-handbook/part-1 -name "*.md" -exec grep -l "\.\.\/\.\.\/\.\.\/guardian\/" {} \; | while read file; do
        echo "Checking Guardian links in $file"
        # Extract and validate Guardian documentation links
        grep -o "\.\.\/\.\.\/\.\.\/guardian\/[^)]*" "$file" | while read link; do
            if [ ! -f "docs/guardian/${link#../../../guardian/}" ]; then
                echo "BROKEN LINK: $link in $file"
            fi
        done
    done
    
    echo "Guardian link validation complete"
    ## Guardian Documentation Sync Process
    
    ### Monthly Sync
    1. Review docs/SUMMARY.md for structural changes
    2. Validate all Guardian documentation links
    3. Update broken or moved references
    4. Check for new relevant documentation
    
    ### Quarterly Review
    1. Assess new Guardian features for methodology relevance
    2. Update integration patterns as needed
    3. Review user feedback on documentation usefulness
    4. Optimize cross-reference effectiveness
    
    ### Annual Assessment
    1. Comprehensive review of Guardian documentation integration
    2. Update integration templates and patterns
    3. Assess methodology developer needs evolution
    4. Plan integration improvements
    ## Guardian Implementation Details Requiring User Input
    
    ### Environment Setup Content
    - [ ] Current Guardian setup requirements for methodology development
    - [ ] Specific tools and configurations needed for methodology work
    - [ ] Guardian platform capabilities relevant to methodology digitization
    - [ ] Screenshots of current Guardian interface
    
    ### Methodology Understanding Content
    - [ ] How methodologies map to Guardian documentation structure
    - [ ] Specific Guardian features used in methodology implementation
    - [ ] Guardian workflow patterns relevant to methodologies
    
    ### Platform Overview Content
    - [ ] Current Guardian architecture screenshots
    - [ ] Methodology implementation details in Guardian
    - [ ] Specific Guardian capabilities used for methodologies
    - [ ] User interface examples from methodology implementations
    <!-- Example: Referencing VM0033 definitions -->
    According to VM0033 Section 3 (Definitions), a "Tidal Wetland" is defined as:
    [Reference: docs/VM0033-methodology-pdf-parsed/VM0033-Methodology.md]
    
    <!-- Example: Referencing applicability conditions -->
    VM0033 applicability conditions (Section 4) specify that projects must:
    [Reference: docs/VM0033-methodology-pdf-parsed/VM0033-Methodology.md]
    ## Guardian Implementation Detail Needed
    
    **Chapter**: [Chapter Number and Title]
    **Section**: [Specific Section]
    **Guardian Feature**: [Specific Guardian capability or implementation]
    
    **Required Information**:
    - [ ] Current implementation status in Guardian
    - [ ] Screenshots of Guardian interface
    - [ ] Configuration files or code examples
    - [ ] API endpoints or database schema
    - [ ] User workflow in Guardian system
    
    **Context**: How this Guardian implementation supports VM0033 methodology
    
    **Note**: Basic VM0033 methodology details will be referenced from parsed documentation
    <!-- CORRECT: Using parsed documentation for methodology content -->
    
    ## VM0033 Baseline Scenarios
    
    According to VM0033 Section 6.1 "Determination of the Most Plausible Baseline Scenario", 
    the methodology requires [specific requirements from parsed documentation].
    
    {% hint style="info" %}
    **Guardian Implementation**: The following shows how Guardian implements VM0033 baseline scenario determination.
    {% endhint %}
    
    [USER INPUT NEEDED: Guardian screenshots and configuration for baseline scenario implementation]
    
    <!-- INCORRECT: Asking user for basic methodology information -->
    [USER INPUT NEEDED: What are VM0033 baseline scenario requirements?]
    <!-- Standard pattern for referencing VM0033 content -->
    **VM0033 Reference**: Section [X.X] - [Section Title]
    **Source**: `docs/VM0033-methodology-pdf-parsed/VM0033-Methodology.md`
    **Content**: [Direct reference to methodology requirements]
    
    **Guardian Implementation**: 
    [USER INPUT NEEDED: How Guardian implements this VM0033 requirement]
  • Audit Trail Creation: Generate immutable records for every data submission and modification

  • : Project boundary definition, quantification approach selection, and baseline scenario establishment
  • Value applied: Actual parameter values, often with stratum-specific or project-specific breakdowns

  • Justification: Required explanation for parameter selection and data source choices

    Field 1 - Description: "Biomass density of vegetation in stratum i"
    Field 2 - Unit: "t d.m. ha-1"
    Field 3 - Equation: "Equation 15, Equation 23"
    Field 4 - Source of data: "Field measurements or literature values"
    Field 5 - Value applied: [Stratum-specific data table]
    Field 6 - Justification: [Required text explanation]

  • Schema Testing Checklist (Chapter 12): Practical validation steps using Guardian's testing features

  • Chapter 12: 15 min reading + 30 min practice (testing checklist)

    | Chapter | Title | Focus | Reading | Practice |
    | --- | --- | --- | --- | --- |
    | 8 | Schema Architecture and Foundations | Guardian schema basics and field mapping | ~20 min | ~30 min |
    | 9 | PDD Schema Development | PDD schema step-by-step | ~25 min | ~60 min |
    | 10 | Monitoring Schema Development | Time-series monitoring and field management | ~20 min | ~45 min |
    | 11 | Advanced Schema Techniques | API management, Required types, UUIDs | ~25 min | ~45 min |
    | 12 | Schema Testing Checklist | Practical validation and testing steps | ~15 min | ~30 min |

    Chapter 8: Schema Architecture and Foundations
    Chapter 9: Project Design Document (PDD) Schema Development
    Chapter 10: Monitoring Report Schema Development
    Chapter 11: Advanced Schema Techniques
    Chapter 12: Schema Testing and Validation Checklist

  • Policy Deployment and Production Management (Chapter 17): Production deployment, monitoring, and operational excellence

  • Production Deployment: Actual configuration and maintenance procedures

    Policy development skills transferable to other methodologies

    Chapter 17: 20 min reading + 45 min practice (testing and deployment)

    | Chapter | Title | Focus | Reading | Practice |
    | --- | --- | --- | --- | --- |
    | 13 | Policy Workflow Architecture | Guardian PWE basics and schema integration | ~25 min | ~60 min |
    | 14 | Workflow Blocks and Configuration | Step-by-step block configuration guide | ~30 min | ~90 min |
    | 15 | VM0033 Implementation Deep Dive | Production policy analysis and patterns | ~25 min | ~75 min |
    | 16 | Advanced Policy Patterns and Testing | Multi-methodology support and testing | ~30 min | ~60 min |
    | 17 | Policy Deployment and Production | Production deployment and management | ~20 min | ~45 min |

    Chapter 13: Policy Workflow Architecture and Design Principles
    Chapter 14: Guardian Workflow Blocks and Configuration
    Chapter 15: VM0033 Policy Implementation Deep Dive
    Chapter 16: Advanced Policy Patterns and Testing
    Chapter 17: Policy Deployment and Production Management

    Excel-First Schema Development

    Building complex schemas via Excel and importing them to Guardian is the fastest way to develop schemas, but there are important pitfalls to avoid:

    ⚠️ Guardian Duplicate Schema Issue: Guardian doesn't de-duplicate schemas during import and will create a duplicate if the same schema is imported twice. This is especially problematic when teams make small adjustments to Excel schemas and are tempted to re-import the entire file.

    Solution: Track schema versions carefully and delete duplicates manually when they occur. Consider maintaining a schema change log to avoid confusion.

    # Example: Check for duplicate schemas via API
    GET /api/v1/schemas
    # Look for schemas with identical names but different UUIDs
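    If you want to catch duplicates programmatically, a sketch along these lines can help. It assumes a Guardian instance at `baseUrl`, a bearer token in `accessToken`, and a response shaped as an array of `{ name, uuid }` objects:

    // Sketch: flag likely duplicate schemas by name before re-importing.
    async function findDuplicateSchemas(baseUrl, accessToken) {
      const res = await fetch(`${baseUrl}/api/v1/schemas`, {
        headers: { Authorization: `Bearer ${accessToken}` },
      });
      const schemas = await res.json();

      // Group schema UUIDs under each name
      const byName = new Map();
      for (const s of schemas) {
        byName.set(s.name, (byName.get(s.name) || []).concat(s.uuid));
      }
      // Any name with more than one UUID is a candidate duplicate
      for (const [name, uuids] of byName) {
        if (uuids.length > 1) console.log(`Possible duplicates of "${name}":`, uuids);
      }
    }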

    Field Key Names from Excel Import

    Issue: Key names of fields imported via Excel aren't human-readable by default. They appear as generic identifiers that make calculation code difficult to maintain.

    Solution: Modify field keys manually after import:

    1. Go to the schema's Advanced tab

    2. Edit the Excel cell IDs in the key field

    3. Use descriptive names that match your calculation variables

    // Before: Unreadable keys from Excel import
    document.credentialSubject.field_1
    document.credentialSubject.field_2

    // After: Readable keys after manual editing
    document.credentialSubject.projectArea
    document.credentialSubject.emissionReductions

    Required Fields and Auto-Calculate Pitfalls

    Guardian offers three field requirement options:

    • Required: User must provide value

    • Non-required: Optional user input

    • Auto-calculate: Calculated via expressions

    ⚠️ Auto-Calculate Limitation: Auto-calculate fields may reference fields from other schemas. If the referenced fields are left empty, the auto-calculated values won't appear in the Indexer.

    Solution: Use non-required fields and implement calculations in custom logic blocks instead:

    // In customLogicBlock instead of auto-calculate
    const projectArea = document.credentialSubject.projectArea || 0;
    const emissionFactor = artifacts[0].emissionFactor || 1;
    const totalEmissions = projectArea * emissionFactor;

    // Output with calculated value
    outputDocument.credentialSubject.calculatedEmissions = totalEmissions;

    Development and Testing Workflow

    Guardian Savepoint Feature

    Use Guardian's savepoint feature to save progress of forms or certification processes, then resume from that stage even after making policy changes and re-triggering dry runs.

    How to Use Savepoints:

    1. Complete part of a workflow (e.g., PDD submission)

    2. Create savepoint before making policy changes

    3. Modify policy blocks

    4. Restore savepoint and continue testing

    This prevents having to fill out long forms repeatedly during development.

    API Development vs Manual Forms

    Tip: Using APIs to submit data is often faster than filling long forms manually during development.

    API Development Workflow:

    1. Fill form manually once with example values

    2. Open Chrome DevTools → Network tab

    3. Submit form and capture the request payload

    4. Extract and modify payload for API testing

    // Use for API testing
    await fetch(`/api/v1/policies/${policyId}/blocks/${blockId}`, {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify(pddPayload)
    });

    Custom Logic Block Testing

    Thorough Testing Approach

    Test custom logic blocks thoroughly using Guardian's testing features. Make sure all edge cases are covered and output VC documents are correct.

    Testing Process:

    1. Test with Minimal Data: Ensure calculations work with required fields only

    2. Test with Maximum Data: Verify calculations with all optional fields populated

    3. Test Edge Cases: Zero values, negative values, missing optional data

    4. Validate Output Schema: Ensure output VC document matches expected schema
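    One way to make these checks repeatable is to script them outside Guardian before pasting logic into the block. A minimal sketch, assuming your block's calculation can be factored into a plain function (`calculateEmissions` and its field keys are illustrative):

    // Sketch: scripting edge-case checks for a customLogicBlock calculation.
    function calculateEmissions(doc) {
      const area = doc.credentialSubject.projectArea || 0;
      const factor = doc.credentialSubject.emissionFactor || 0;
      return area * factor;
    }

    const cases = [
      { name: "minimal data", doc: { credentialSubject: { projectArea: 10, emissionFactor: 2 } }, expect: 20 },
      { name: "zero values", doc: { credentialSubject: { projectArea: 0, emissionFactor: 2 } }, expect: 0 },
      { name: "missing optional", doc: { credentialSubject: { projectArea: 10 } }, expect: 0 },
      { name: "negative values", doc: { credentialSubject: { projectArea: -5, emissionFactor: 2 } }, expect: -10 },
    ];

    cases.forEach((c) => {
      const got = calculateEmissions(c.doc);
      console.log(`${c.name}: ${got === c.expect ? "PASS" : `FAIL (got ${got})`}`);
    });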

    Document Version History

    Key Feature: In the testing dialog, you can choose document versions from intermediate workflow steps.

    For example, if your workflow is: Document Submission → Tool 1 Processing → Tool 2 Processing → Final Calculation

    You can view intermediate document versions in the History tab of input data to debug calculation progressions.

    Custom Logic Testing Interface

    Debugging Steps:

    1. Select intermediate document version from History tab

    2. Run calculation with that specific version

    3. Compare expected vs actual outputs

    4. Identify where calculations diverge from expectations

    Document Flow Troubleshooting

    Missing Documents in UI

    Common Issue: Document processing is successful, but documents don't appear in the relevant UI section.

    Root Cause: This almost always indicates improper event hooking between workflow blocks.

    Debugging Process:

    1. Check Event Configuration: Verify source and target block events are properly configured

    2. Validate Event Propagation: Ensure events flow from submission block to display block

    3. Review Block Permissions: Confirm the viewing user has permissions for the target block

    Common Event Mistakes:

    • Missing event connections between blocks

    • Incorrect event actor configuration (owner/issuer/initiator)

    • Event disabled accidentally during policy editing

    • Stop propagation checked accidentally, preventing events from reaching downstream blocks

    Event Debugging Checklist

    When documents aren't appearing:

    1. ✅ Source Block Events: Check if source block has output events configured

    2. ✅ Target Block Events: Verify target block has matching input events

    3. ✅ Event Actor: Confirm event actor matches document ownership

    4. ✅ Block Permissions: Ensure viewing user has access to target block

    5. ✅ Policy State: Verify policy is in correct state (published/dry run)

    6. ✅ Browser Cache: Clear cache and refresh (sometimes needed for UI updates)

    Performance and Optimization

    Large Schema Performance

    Issue: Forms with many fields (50+ fields) can load slowly and affect user experience.

    Solutions:

    • Group Related Fields: Use schema composition to break large schemas into logical sections

    • Conditional Fields: Use conditional visibility to show only relevant fields

    • Progressive Disclosure: Show basic fields first, advanced fields on demand

    Common Calculation Issues

    Precision and Rounding

    Issue: JavaScript floating-point arithmetic can cause precision issues in calculations.

    Solution: Use fixed decimal precision for monetary and emission calculations:

    // Problem: Floating point precision
    const result = 0.1 + 0.2; // 0.30000000000000004

    // Solution: Fixed precision
    const emissionReductions = Math.round((baseline - project) * 100) / 100;
    const monetaryValue = Math.round(emissionReductions * carbonPrice * 100) / 100;

    Missing Validation

    Issue: Calculations proceed with invalid or missing input data.

    Solution: Add comprehensive input and output document validation using documentValidatorBlock as well as within your custom logic code. Use the provided debug function to add debug logs to the code.
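    As a sketch of what in-code validation can look like, assuming Guardian's customLogicBlock conventions (a `documents` input, a `done()` callback, and the provided `debug` helper) and illustrative field keys:

    // Sketch: validating inputs at the top of a customLogicBlock.
    const doc = documents[0].document;
    const cs = Array.isArray(doc.credentialSubject)
      ? doc.credentialSubject[0]
      : doc.credentialSubject;

    const errors = [];
    if (cs.projectArea == null || Number(cs.projectArea) <= 0) {
      errors.push("projectArea must be a positive number");
    }
    if (cs.emissionFactor == null) {
      errors.push("emissionFactor is missing");
    }

    if (errors.length > 0) {
      debug(`Validation failed: ${errors.join("; ")}`); // surfaces in testing logs
      throw new Error(errors.join("; "));
    }

    cs.calculatedEmissions = Number(cs.projectArea) * Number(cs.emissionFactor);
    done(doc);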

    Quick Reference Checklist

    Schema Development

    • ✅ Use Excel-first approach for complex schemas

    • ✅ Avoid re-importing identical schemas (creates duplicates)

    • ✅ Edit field keys for readable calculation code

    • ✅ Use custom logic blocks instead of auto-calculate for cross-schema references

    Development Workflow

    • ✅ Use savepoints to preserve testing progress

    • ✅ Capture API payloads from DevTools for faster testing

    • ✅ Test custom logic blocks with all edge cases

    • ✅ Use document history to debug calculation progressions

    Troubleshooting

    • ✅ Check event propagation when documents don't appear

    • ✅ Validate input data before calculations

    • ✅ Use fixed precision for financial/emission calculations

    • ✅ Add delays between bulk API operations


    These practical tips can prevent many common issues and significantly speed up development. Remember that methodical debugging and thorough testing are key to successful Guardian implementations.

    Chapter 4: Methodology Analysis and Decomposition

    When we first tackled digitizing VM0033, we quickly realized that jumping straight into coding or configuration would be overwhelming. A 130-page methodology document with complex calculations needed a systematic approach to break it down into manageable pieces. This chapter shares the analysis approach we developed during VM0033 digitization - what worked, what we learned, and how you can apply these techniques to other methodologies.

    The analysis process transforms a complex PDF into organized components ready for digital implementation. Rather than trying to understand everything at once, we found it more effective to use structured reading techniques that focus on the most important sections for digitization while building understanding progressively.

    Structured Reading Approach for Methodology Analysis

    During VM0033 digitization, we developed a reading approach that prioritizes sections based on their importance for digital implementation. This approach emerged from trial and error - we initially tried to understand everything equally, which led to information overload.

    Reading Priority Order We Used:

    1. Applicability Conditions - Tells us what projects can use this methodology

    2. Quantification of GHG Emission Reductions and Removals - Contains all the math we need to implement

    3. Monitoring - Defines what data users need to collect

    4. Project Boundary - Shows what's included in calculations

    This order worked well because it builds understanding logically. We need to know what projects qualify before diving into calculations, and we need to understand the calculations before figuring out how to collect the required data.

    First Pass - Structure Mapping: Start by reading the table of contents to understand how the methodology is organized. VM0033 follows the standard VCS format with 10 main sections, but we found that Section 8 (Quantification) contains most of the mathematical complexity we needed to implement.

    Second Pass - Core Section Focus: Read the four priority sections thoroughly, taking notes on requirements that need to be implemented digitally. During this pass, we identified calculation procedures, parameter definitions, decision logic, and validation rules that would become digital components.

    Third Pass - Integration Details: Read the remaining sections to understand how the methodology connects to external tools and handles edge cases. This reading helped us understand dependencies and special situations we needed to account for.

    Note-Taking Techniques That Worked

    Focus on Digital Implementation: As we read, we kept asking "What here needs to be automated?" and "What decisions does a user need to make?" This helped us identify the specific elements that would become features in our digital implementation.

    Consistent Marking System: We developed a simple system for marking different types of content - equations got one color, parameters another, decision points a third. This made it easier to find information later when we were building the digital version.

    Cross-Reference Tracking: We noted how different sections referenced each other, especially how the quantification section built on the boundary definitions and how monitoring requirements supported calculations. These connections were important for making sure our digital implementation maintained the methodology's logic.

    Understanding the Three-Actor Workflow

    Most carbon methodologies, including VM0033, work within a standard three-actor certification process. Understanding this workflow was crucial for designing our digital implementation because the platform needed to support all three actors and their interactions.

    The Three Actors:

    Standard Registry (Verra in VM0033's case): The organization that maintains the methodology and oversees the certification process. They approve projects, oversee validation and verification, and issue the final carbon credits.

    Validation and Verification Body (VVB): Independent auditors who check that projects comply with the methodology requirements. They validate project designs initially and verify monitoring results ongoing.

    Project Developer: The organization implementing the restoration project and seeking carbon credits. For VM0033, this would be whoever is planting and maintaining the mangroves.

    How They Interact:

    1. Project Registration: Project developer submits project documents to the registry

    2. Validation: Project developer hires a VVB to validate their project design

    3. Project Approval: Registry approves the project based on VVB validation

    4. Monitoring: Project developer collects data and submits monitoring reports

    5. Verification: Project developer hires VVB to verify their monitoring results

    6. Credit Issuance: Registry issues credits based on VVB verification

    When we designed the Guardian policy for VM0033, we built this workflow into the platform so that each actor has appropriate permissions and can only see and do what they're supposed to according to their role.

    VM0033 Specific Considerations

    For the Allcot ABC Mangrove project, we focused on mangrove restoration as the primary activity. The project involves planting mangroves in coastal areas where they had been lost or degraded. This kept our initial implementation focused rather than trying to handle all possible restoration activities that VM0033 theoretically allows.

    The three-actor workflow works well for mangrove projects because:

    • Project developers can focus on planting and monitoring mangroves

    • VVBs can verify that restoration activities meet VM0033 requirements

    • The registry can issue credits knowing the work has been independently validated

    Parameter Extraction and Organization

    One of the most time-consuming parts of analysis was identifying all the parameters (data inputs) that users would need to provide. VM0033 has many parameters scattered throughout the document, and some are used in multiple calculations.

    Parameter Types We Identified:

    Monitored Parameters: Data that project developers collect through measurements. For mangrove projects, this includes things like tree diameter measurements, survival rates, soil samples, and water level measurements.

    User-Input Parameters: Project-specific information that users provide during setup. This includes project area size, crediting period length, restoration activities planned, and location details.

    Default Values: Standard values provided by VM0033 that can be used when site-specific measurements aren't available. These include default growth rates, carbon content factors, and emission factors.

    Calculated Parameters: Values that get computed from other parameters using equations in the methodology. These form chains of calculations that we needed to map carefully.

    Parameter Organization Approach

    Systematic Extraction: We went through each section methodically, making lists of every parameter mentioned, along with its definition, units, and where it gets used. This was tedious but essential for making sure we didn't miss anything.

    Reuse Identification: Many parameters appear in multiple calculations. Identifying these reuse opportunities helped us design efficient data collection where users enter information once and it gets used wherever needed.

    Validation Requirements: Each parameter has requirements about valid ranges, formats, or dependencies. We documented these during analysis because they would become validation rules in our digital implementation.

    Introduction to Recursive Analysis

    When we first looked at VM0033's final calculation equation, it seemed simple. But we quickly realized that each term in that equation depends on other calculations, which depend on still other calculations, creating a complex web of dependencies.

    Starting Point: VM0033's goal is calculating Net GHG Emission Reductions and Removals (NERRWE). The basic equation is:

    NERRWE = BE - PE - LK

    Where:

    • NERRWE = Net emission reductions from the wetland project

    • BE = Baseline emissions (what would have happened without the project)

    • PE = Project emissions (emissions from project activities)

    • LK = Leakage (emissions that might occur elsewhere because of the project)

    The Challenge: Each of these terms (BE, PE, LK) has its own complex calculations with many sub-components. To implement this digitally, we needed to trace back from the final answer to identify every piece of data a user would need to provide.

    Recursive Approach: Starting with NERRWE, we asked "What do we need to calculate this?" Then for each dependency, we asked the same question, continuing until we reached basic measured values or user inputs. This created a tree-like structure showing all the calculation dependencies.
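    As an illustration of this recursive walk, here is a small JavaScript sketch with a toy dependency map. The top-level names mirror NERRWE = BE - PE - LK; everything below that level is illustrative, not VM0033's full dependency tree:

    // Sketch of the recursive dependency walk over a toy dependency map.
    const dependencies = {
      NERRWE: ["BE", "PE", "LK"],
      BE: ["soilCarbonLoss", "methaneEmissions", "biomassDecay"],
      PE: ["fossilFuelUse"],
    };

    // Collect every leaf parameter (a measurement or user input) behind a target.
    function collectInputs(param, leaves = new Set()) {
      const deps = dependencies[param] || [];
      if (deps.length === 0) {
        leaves.add(param); // no further dependencies: this is a required input
        return leaves;
      }
      deps.forEach((d) => collectInputs(d, leaves));
      return leaves;
    }

    console.log([...collectInputs("NERRWE")]);
    // → ["soilCarbonLoss", "methaneEmissions", "biomassDecay", "fossilFuelUse", "LK"]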

    Benefits of This Approach

    Complete Parameter Discovery: Working backward from final results ensured we found all required inputs, even ones that are referenced indirectly through multiple calculation layers.

    Logical Implementation Order: Understanding dependencies helped us sequence implementation so that basic inputs are collected before calculations that depend on them.

    Validation Points: The dependency tree showed us where validation should happen - we could catch problems early rather than only discovering them at the final calculation stage.

    Tools and External References

    VM0033 references several external calculation tools that we needed to understand and integrate. During our first digitization attempt, we implemented the ones that were most essential for the mangrove restoration focus.

    Reference Materials: For detailed VM0033 analysis, consult the parsed methodology document and test case artifact in our Artifacts Collection.

    Tools We Implemented:

    AR-Tool05: This CDM tool calculates emissions from fossil fuel use during project activities. For mangrove projects, this covers emissions from boats, equipment, and transportation used during planting and monitoring.

    AR-Tool14: This CDM tool estimates carbon stocks in trees and shrubs using standard equations. We used this for calculating carbon storage in mangrove biomass as the trees grow.

    AFOLU Non-permanence Risk Tool: This VCS tool assesses the risk that carbon benefits might be reversed. For mangrove projects, this considers risks like storm damage, disease, or land use changes.

    Tool Integration Approach

    Understanding Tool Purpose: For each tool, we figured out what specific problem it solves and how that fits into the overall VM0033 calculation framework.

    Data Flow Mapping: We traced how data flows between VM0033 calculations and the external tools - what information goes in, what results come out, and how those results get used in other calculations.

    Implementation Decisions: Rather than trying to implement every referenced tool perfectly, we focused on the core functionality needed for mangrove projects. This kept our initial implementation manageable while still meeting methodology requirements.

    VM0033 Analysis Walkthrough

    Let's walk through how we applied these analysis techniques to specific parts of VM0033, using examples from our actual digitization work.

    Applicability Analysis: VM0033 Section 4 defines what projects can use the methodology. For mangrove restoration, the key requirements are that projects restore degraded tidal wetlands through activities like replanting native species and improving hydrological conditions. We identified the specific criteria that our digital implementation needed to check during project registration.

    Calculation Structure: Section 8 contains VM0033's mathematical core. We found that baseline emissions calculations (what would happen without restoration) were quite complex, involving soil carbon loss, methane emissions, and biomass decay. Project emissions were simpler for mangrove planting but still required careful tracking of fossil fuel use and disturbance effects.

    Monitoring Requirements: Sections 9.1 and 9.2 define what data projects need to collect. For mangrove restoration, this includes regular measurements of tree survival, growth rates, soil conditions, and water levels. We organized these into data collection schedules that could be built into the Guardian interface.

    Practical Lessons Learned

    Start Simple: We initially tried to handle all possible restoration activities VM0033 allows, but this created too much complexity. Focusing on mangrove planting first gave us a working system that we could expand later.

    Document Everything: Even seemingly small details about parameter definitions or calculation procedures became important during implementation. Good documentation during analysis saved time later.

    Test Understanding: We regularly tested our understanding by trying to work through example calculations manually. This helped us catch misunderstandings before they became implementation problems.

    From Analysis to Implementation Planning

    The analysis work creates a foundation for the more detailed equation mapping and parameter identification that comes in Chapter 5. Here's how the analysis results feed into subsequent work.

    Parameter Lists: The parameters we identified during analysis become the basis for detailed dependency mapping in Chapter 5.

    Calculation Structure: Our understanding of how VM0033's calculations fit together guides the recursive analysis work that systematically maps every mathematical dependency.

    Tool Integration: The external tools we identified need detailed integration planning, which we'll cover in Chapter 6.

    Validation Framework: The validation requirements we identified during analysis inform the test artifact development in Chapter 7.


    Analysis Summary and Next Steps

    Analysis Foundation Complete: You now understand the systematic approach we used to break down VM0033 into implementable components.

    Key Analysis Outcomes:

    • Baseline Scenario - Explains the reference point for calculations

    • External tool identification with integration requirements

    Preparation for Chapter 5: Your parameter extraction work and understanding of calculation structure from this chapter will be essential for the detailed equation mapping we'll cover next. Chapter 5 builds directly on this foundation to create complete mathematical dependency maps.

    Applying to Other Methodologies: While we used VM0033 as our example, these analysis techniques apply to other environmental methodologies. The structured reading approach, parameter extraction methods, and recursive analysis concepts work for any methodology you might want to digitize.

    Learning from Experience: These techniques represent what we learned during VM0033 digitization. They worked for us, but you might find improvements or adaptations that work better for your specific methodology or implementation approach.

    Table of Contents

    Navigation Tip: Use the sidebar navigation or click on any chapter title to jump directly to detailed chapter outlines.

    Part I: Foundation and Preparation

    Chapter 1: Introduction to Methodology Digitization

    Understanding the digitization process, Guardian platform capabilities, and the role of VM0033 as our reference methodology. This chapter establishes the context and objectives for methodology digitization.

    Chapter 2: Understanding VM0033 Methodology

    Deep dive into the VM0033 methodology structure, applicability conditions, baseline scenarios, and emission reduction calculations. This chapter provides the domain knowledge foundation needed before digitization begins.

    Chapter 3: Guardian Platform Overview for Methodology Developers

    Comprehensive introduction to Guardian's architecture, Policy Workflow Engine (PWE), schema system, and key concepts specifically relevant to methodology digitization.

    Part II: Analysis and Planning

    Chapter 4: Methodology Analysis and Decomposition

    Systematic approach to reading and analyzing methodology PDFs, identifying key components, stakeholders, and workflow requirements. Includes techniques for extracting calculation logic and parameter dependencies using industry-proven recursive analysis techniques.

    Chapter 5: Equation Mapping and Parameter Identification

    Step-by-step process for identifying all equations used in baseline emissions, project emissions, and leakage calculations. Covers recursive parameter analysis and dependency mapping using VM0033 examples with comprehensive mathematical component extraction.

    Chapter 6: Tools and Modules Integration

    Understanding and incorporating external tools and modules referenced in methodologies. Covers CDM tools, VCS modules, and other standard calculation tools used in VM0033, including unified calculation framework development.

    Chapter 7: Test Artifact Development

    Creating comprehensive test spreadsheets containing all input parameters, output parameters, and final emission reduction calculations. This artifact becomes the validation benchmark for the digitized policy, with real VM0033 test artifact examples.

    Part III: Schema Design and Development

    Chapter 8: Schema Architecture and Foundations

    Guardian schema system fundamentals, JSON Schema integration, and two-part architecture patterns. Establishes field mapping principles and architectural understanding for methodology schema development.

    Chapter 9: Project Design Document (PDD) Schema Development

    Step-by-step Excel-first approach to building comprehensive PDD schemas. Covers Guardian template usage, conditional logic implementation, sub-schema creation, and essential field key management for calculation code readability.

    Chapter 10: Monitoring Report Schema Development

    Time-series monitoring schema development with temporal data structures, annual parameter tracking, and field key management for time-series calculations. Includes VVB verification workflow support.

    Chapter 11: Advanced Schema Techniques

    API schema management, standardized property definitions, Required field types (None/Hidden/Required/Auto Calculate), and UUID management for efficient schema development and maintenance.

    Chapter 12: Schema Testing and Validation Checklist

    Practical schema validation using Guardian's testing features including Default/Suggested/Test values, preview testing, UUID integration, and pre-deployment checklist for production readiness.

    Part IV: Policy Workflow Design and Implementation

    Chapter 13: Policy Workflow Architecture and Design Principles

    Guardian policy architecture fundamentals, workflow block system, event-driven communication, and design patterns. Establishes core concepts for building production-ready environmental policies using VM0033 as the implementation reference.

    Chapter 14: Guardian Workflow Blocks and Configuration

    Complete guide to Guardian's workflow blocks including interfaceDocumentsSourceBlock, buttonBlock, requestVcDocumentBlock, and role management. Covers block configuration, permissions, event routing, and UI integration with practical VM0033 examples.

    Chapter 15: VM0033 Implementation Deep Dive

    Deep technical analysis of VM0033 policy implementation using actual JSON configurations. Covers VVB approval workflows, project submission processes, and role-based access patterns with real Guardian block configurations extracted from production policy.

    Chapter 16: Advanced Policy Patterns

    Advanced policy implementation patterns including transformation blocks for Verra API integration, document validation blocks, external data integration, policy testing frameworks, and demo mode configuration using VM0033 production examples.

    Part V: Calculation Logic Implementation

    Chapter 18: Custom Logic Block Development

    Comprehensive guide to implementing VM0033 emission reduction calculations using Guardian's customLogicBlock. Covers baseline emissions, project emissions, leakage calculations, and final net emission reductions using real JavaScript implementation with VM0033 test artifacts validation.

    Chapter 19: Formula Linked Definitions (FLDs)

    Brief foundation chapter establishing FLD concepts for parameter relationship management in Guardian methodologies. Covers parameter reuse patterns and integration with customLogicBlock calculations using VM0033 examples.

    Chapter 20: Guardian Tools Architecture and Implementation

    Complete guide to building Guardian Tools using AR Tool 14 as practical example. Covers Tools as mini-policies, extractDataBlock workflows, customLogicBlock integration, and production implementation patterns for standardized calculation tools that integrate with multiple methodologies.

    Chapter 21: Calculation Testing and Validation

    Comprehensive testing using Guardian's built-in testing capabilities including dry-run mode and customLogicBlock testing interface. Covers interactive testing with three input methods, validation against VM0033 test artifacts, testing at every calculation stage, and API-based automated testing using Guardian's REST APIs.

    Part VI: Integration and Testing

    Chapter 22: End-to-End Policy Testing

    Testing complete methodology workflows across all stakeholder roles using Guardian's dry-run capabilities and VM0033 production patterns. Covers multi-role testing frameworks, virtual user management, production-scale data validation, and cross-component integration testing.

    Chapter 23: API Integration and Automation

    Automating methodology operations using Guardian's REST API framework. Covers authentication patterns, VM0033 policy block API structure, dry-run operations with virtual users, automated workflow execution, and Cypress testing integration for production deployment.

    Part VII: Deployment and Maintenance

    Chapter 24: User Management and Role Assignment

    🚧 In Development - Setting up user roles, permissions, and access controls for different stakeholders in the methodology workflow. Covers user onboarding, organization management, security policies, and role-based access controls.

    Chapter 25: Monitoring and Analytics - Guardian Indexer

    🚧 In Development - Implementing monitoring, logging, and analytics for deployed methodologies using Guardian Indexer. Covers usage analytics, compliance reporting, audit trails, and performance monitoring.

    Chapter 26: Maintenance and Updates

    🚧 In Development - Strategies for maintaining deployed methodologies, handling methodology updates, and managing backward compatibility. Covers version management, bug fixing, and regulatory change management.

    Part VIII: Advanced Topics and Best Practices

    Chapter 27: Integration with External Systems

    ✅ Available - Bidirectional data exchange between Guardian and external platforms. Covers data transformation using dataTransformationAddon blocks and external data reception using MRV configuration patterns.

    Chapter 28: Troubleshooting and Common Issues

    ✅ Available - Practical tips and solutions for common problems encountered during methodology digitization. Covers schema development pitfalls, development workflow optimization, custom logic testing, and event troubleshooting.

    Part IX: Appendices and References

    Appendix A: VM0033 Complete Implementation Reference

    Complete code examples, schema definitions, and configuration files for the VM0033 implementation.

    Appendix B: Guardian Block Reference Guide

    Quick reference guide for all Guardian policy workflow blocks with methodology-specific usage examples.

    Appendix C: Calculation Templates and Examples

    Reusable calculation templates and examples for common methodology patterns.

    Appendix D: Testing Checklists and Templates

    Comprehensive checklists and templates for testing methodology implementations.

    Appendix E: API Reference for Methodology Developers

    Focused API documentation for methodology-specific use cases and automation.

    Appendix F: Glossary and Terminology

    Comprehensive glossary of terms used in methodology digitization and Guardian platform.


    Chapter Organization

    Consistent Structure: Each chapter follows the same format for easy navigation and learning.

    | Section | Description |
    | --- | --- |
    | Learning Objectives | What you'll accomplish in this chapter |
    | Prerequisites | Required knowledge or completed previous chapters |
    | Conceptual Overview | Theory and background information |
    | VM0033 Example | Practical application using our reference methodology |
    | Step-by-Step Implementation | Detailed instructions with code/configuration |
    | Testing and Validation | How to verify your implementation |
    | Common Issues | Troubleshooting and problem-solving |
    | Best Practices | Recommendations and optimization tips |
    | Chapter Summary | Key takeaways and next steps |

    Estimated Reading Time

    | Part | Estimated Time | Coverage |
    | --- | --- | --- |
    | Total | 20-30 hours | Comprehensive coverage of all aspects of methodology digitization from foundation to advanced topics. |
    | Part I-III | 12-16 hours | Essential knowledge for understanding Guardian platform and designing data structures. |
    | Part IV-V | 8-11 hours | Core implementation skills for policy workflows and calculation logic. |
    | Part VI-VIII | 5-8 hours | Production deployment, maintenance, and advanced techniques. |

    Prerequisites

    Before You Begin: Ensure you have the following prerequisites in place.

    • Basic understanding of environmental methodologies and carbon markets

    • Familiarity with JSON and basic programming concepts

    • Access to Guardian platform instance for hands-on practice

    • VM0033 methodology document for reference


    Next Steps: Ready to begin? Start with the detailed chapter outlines or jump directly to Chapter 1.

    Chapter 7: Test Artifact Development

    Creating comprehensive test artifacts was one of the most valuable parts of our VM0033 digitization work, and we couldn't have done it without Verra's help. The test artifacts became our foundation for schema design, calculation verification, and ongoing validation. This chapter explains how we worked with Verra to develop test cases using real Allcot project data and how these artifacts guided every aspect of our implementation.

    The collaboration with Verra was crucial because they provided the methodology expertise needed to create realistic test scenarios, while Allcot provided real project data from their ABC Mangrove Senegal project. This combination gave us authentic test cases that reflected actual project conditions rather than hypothetical examples we might have created ourselves.

    The Collaborative Approach with Verra

    When we started digitization work, we realized that creating accurate test cases would require deep methodology expertise that we didn't have. We needed someone who understood VM0033's intricacies and could create test scenarios that properly exercised all the calculation pathways we had identified through recursive analysis.

    Verra's Contribution: Verra brought methodology expertise to help us understand which test scenarios would be most valuable and how to structure test cases that would validate both individual calculations and overall methodology compliance.

    Allcot's Data Contribution: Allcot provided real project data from their ABC Mangrove Senegal project, including:

    • Actual PDD data with site-specific parameters

    • Real emission reduction calculations from their project development work

    • Authentic assumptions about growth rates, mortality, and site conditions

    • Practical boundary condition decisions for a working mangrove project

    Our Role: We provided the technical framework needs - what parameters we needed, how calculations would be structured in Guardian, and what validation scenarios would help us verify digital implementation accuracy.

    The Result: Two comprehensive Excel artifacts that became our validation benchmarks - the detailed test case artifact and the original ER calculations from Allcot's PDD work.

    Why This Collaboration Worked

    Real Project Grounding: Using actual Allcot project data meant our test cases reflected real-world conditions and decision-making rather than theoretical scenarios.

    Methodology Validation: Verra's involvement ensured our test cases properly interpreted VM0033 requirements and followed accepted calculation procedures.

    Implementation Focus: Our technical requirements kept the test development focused on what we actually needed for digitization rather than creating comprehensive academic examples.

    Understanding the Allcot ABC Mangrove Project Data

    The Allcot ABC Mangrove Senegal project provided an ideal test case because it represented a straightforward mangrove restoration approach with well-documented assumptions and calculations.

    Project Characteristics:

    • Total Area: 7,000 hectares across 4 strata with different baseline conditions

    • Planting Approach: Manual propagule planting by local communities - no heavy machinery

    • Species Focus: Rhizophora mangle (red mangrove) with known allometric equations

    • Timeframe: 40-year crediting period starting in 2022

    Key Project Parameters from ER Calculations:

    • Planting Density: 5,500 trees per hectare initially planted

    • Growth Model: Chapman-Richards function for DBH growth over time

    • Allometric Equation: Ln(AGB) = 5.534244 + 2.404770 * Ln(DBH)

    • Root:Shoot Ratio: 0.29 for below-ground biomass calculations

    • Mortality Assumptions: 50% mortality overall, with specific patterns over time

    • Carbon Fraction: 0.47 for converting biomass to carbon content

    • Soil Carbon Rate: 1.83 t C/ha/year after allochthonous carbon deduction
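    To see how these workbook parameters fit together, here is a minimal JavaScript sketch. The allometric equation, root:shoot ratio, and carbon fraction come from the list above; the function name, the DBH input value, and the unit handling are illustrative assumptions, not the project's actual calculation code:

    // Sketch: per-tree carbon from the ER-workbook parameters above.
    const ROOT_SHOOT_RATIO = 0.29; // below-ground : above-ground biomass
    const CARBON_FRACTION = 0.47;  // carbon content per unit dry-matter biomass

    function treeCarbon(dbh) {
      // Allometric equation from the ER calculations: Ln(AGB) = 5.534244 + 2.404770 * Ln(DBH)
      const agb = Math.exp(5.534244 + 2.404770 * Math.log(dbh)); // above-ground biomass
      const bgb = agb * ROOT_SHOOT_RATIO;                        // below-ground biomass
      return (agb + bgb) * CARBON_FRACTION;                      // carbon per tree
    }

    console.log(treeCarbon(5)); // e.g. a tree with DBH of 5 (units per the ER workbook)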

    Boundary Simplifications:

    • No fire reduction premium (eliminated fire calculations)

    • No fossil fuel emissions (simple planting activities)

    • Mineral soil only (no peat calculations)

    • No wood products (no harvesting planned)

    How Project Data Informed Test Scenarios

    Realistic Parameter Ranges: The Allcot data showed us realistic ranges for key parameters - growth rates that reflect actual site conditions, mortality patterns based on field experience, and carbon accumulation rates based on literature and site measurements.

    Calculation Complexity: The project showed us how many calculations were actually needed vs. the full VM0033 complexity. This helped us focus test development on calculations that would actually be used.

    Multi-Stratum Scenarios: With 4 different strata having different baseline biomass levels (1149, 2115, 2397, 1339 t C/ha), we could test how calculations handle different starting conditions and scaling across project areas.

    Test Artifact Structure and Organization

    The test artifacts we developed with Verra create a comprehensive validation framework organized around VM0033's calculation structure.

    Primary Test Case Artifact: VM0033_Allcot_Test_Case_Artifact.xlsx. This artifact contains the complete parameter set and calculation framework needed for Guardian implementation:

    Project Boundary Definition: Documents exactly which carbon pools and emission sources are included/excluded, providing the conditional logic needed for Guardian's schema design.

    Quantification Approach Selection: Shows which calculation methods are used (field data vs. proxies, stock approach vs. flow approach) and when different parameters are required.

    Stratum-Level Parameters: Complete parameter sets for all 4 project strata, showing how site conditions vary and how this affects calculation requirements.

    Temporal Boundaries: Peat depletion time (PDT) and soil organic carbon depletion time (SDT) calculations for each stratum, though simplified for mineral soil conditions.

    Annual Calculation Framework: Year-by-year calculations from 2022 to 2061 showing how parameters change over time and how calculations scale across the 40-year crediting period.

    Monitoring Requirements: Complete parameter lists organized by validation vs. monitoring periods, showing when different data needs to be collected.

    Supporting ER Calculations Artifact

    Original Allcot Calculations: ER_calculations_ABC Senegal.xlsx. This artifact contains the original project calculations that Allcot developed for their PDD:

    Assumptions and Parameters: Detailed documentation of all project assumptions including growth models, mortality rates, allometric equations, and site-specific factors.

    Growth Projections: Complete DBH growth projections using Chapman-Richards model, providing year-by-year diameter estimates that feed into biomass calculations.

    Calculation Results: Annual emission reduction calculations over the 40-year period, providing expected results that our digital implementation should match.

    Validation Benchmarks: Final totals and annual averages that became our accuracy targets during implementation testing.

    How Test Artifacts Guided Schema Design

    The test artifacts became our primary reference during Guardian schema development because they showed us exactly what data users would need to provide and how it would be structured.

    PDD Schema Requirements: The project boundary and quantification approach selections from the test artifact directly translated into conditional field requirements in our PDD schema design.

    Monitoring Report Structure: The annual calculation requirements showed us which parameters needed to be collected each year vs. only at validation, informing our monitoring report schema organization.

    Parameter Grouping: The test artifact's organization by strata, time periods, and calculation components helped us design schema sections that match how users actually think about project data.

    Validation Logic: The conditional parameter requirements (like "when fire reduction premium = true") became validation rules in our schema design that show/hide fields based on user selections.

    From Test Artifact to Guardian Implementation

    Direct Translation: Many sections of the test artifact could be directly translated into Guardian schema fields. For example, the stratum-level input parameters became repeating sections in our project schema.

    Calculation Verification: The test artifact calculations became our verification benchmark - our Guardian implementation needed to produce the same results using the same input parameters.

    User Experience Insights: Seeing how parameters were organized in the test artifact helped us understand how to structure Guardian forms and data collection workflows.

    Verification and Validation Process

    The test artifacts enabled systematic verification of our Guardian implementation by providing known-good calculation results that we could compare against our digital calculations.

    Baseline Verification: Using the test artifact's baseline biomass values and parameters, we verified that our Guardian calculations produced matching baseline calculations.

    Project Calculation Testing: The annual growth projections and biomass calculations from the test artifact became our benchmark for testing AR-Tool14 integration and biomass calculation accuracy.

    Net Emission Reductions: The final ER calculations provided year-by-year targets that our complete Guardian implementation needed to match within acceptable precision tolerances.

    Parameter Validation: The test artifact showed us which parameter combinations were valid and which should trigger validation errors, informing our schema validation rule design.

    Testing Methodology We Used

    Individual Component Testing: We tested each calculation component (baseline, project, leakage) separately using test artifact parameters to isolate any calculation errors.

    Integration Testing: After individual components worked correctly, we tested the complete calculation chain using full test artifact scenarios.

    Precision Analysis: We documented acceptable precision differences between our calculations and test artifact results, accounting for rounding differences and calculation sequence variations.

    Edge Case Testing: The test artifact parameters helped us identify edge cases (like zero values, boundary conditions) that needed special handling in our implementation.

    Real-World Application Benefits

    Having comprehensive test artifacts based on real project data provided benefits throughout our digitization work and continues to be valuable for ongoing development.

    Implementation Confidence: Knowing our calculations matched real project calculations gave us confidence that our Guardian implementation would work correctly for actual projects.

    Schema Validation: The test artifacts helped us verify that our Guardian schemas could handle real project complexity and data requirements.

    User Testing: When we tested Guardian with potential users, having realistic test data made the testing sessions much more meaningful than using hypothetical examples.

    Documentation Reference: The test artifacts became our reference for writing user documentation and help text, providing concrete examples of how parameters are used.

    Quality Assurance: Ongoing development work uses the test artifacts as regression tests to ensure code changes don't break existing calculation accuracy.

    Long-Term Value

    Maintenance Reference: When we need to modify calculations or add new features, the test artifacts provide a comprehensive reference for ensuring changes maintain calculation accuracy.

    Expansion Foundation: If we extend Guardian to handle additional VM0033 features or variations, the test artifacts provide a foundation for developing additional test scenarios.

    Training Resource: The test artifacts help new team members understand VM0033 requirements and Guardian implementation by providing concrete examples of complete calculation scenarios.

    Lessons from Test Artifact Development

    Collaboration is Essential: We could not have created effective test artifacts without Verra's methodology expertise and Allcot's real project data. The collaborative approach was crucial for creating useful validation tools.

    Real Data Matters: Using actual project data rather than hypothetical scenarios made our test artifacts much more valuable for validating implementation accuracy and user experience.

    Comprehensive Coverage: Attempting to create test scenarios that cover all calculation pathways, parameter combinations, and edge cases requires systematic organization and significant effort.

    Living Documents: Test artifacts need to be maintained and updated as understanding improves and requirements evolve. We continue to reference and occasionally update our artifacts based on implementation experience.

    Implementation Integration: Test artifacts are most valuable when they're designed from the beginning to support the specific implementation work being done, rather than created as general methodology examples.


    Test Artifact Development Summary and Implementation Readiness

    Validation Framework Complete: You now understand how collaborative test artifact development creates the foundation for accurate methodology digitization.

    Key Test Development Outcomes:

    • Verification methodology for validating digital implementation accuracy against known-good calculations

    Implementation Readiness: The systematic analysis and planning work completed in Part II provides comprehensive foundation for technical implementation. The methodology analysis, equation mapping, tool integration, and test artifact development create detailed requirements and validation frameworks that directly support schema design and policy development.

    Real-World Validation: Using actual project data from a real mangrove restoration project ensures that digitization work addresses practical implementation needs rather than theoretical scenarios, improving the likelihood of successful deployment and user adoption.

    Collaborative Success: The test artifact development demonstrates the value of combining technical digitization expertise with domain knowledge and real project experience to create comprehensive validation frameworks.




    Chapter 11: Advanced Schema Techniques

    This chapter covers essential advanced techniques for schema management that extend beyond the Excel-first approach. You'll learn API-based schema operations, field properties, the four Required field types, and UUID management for efficient schema development.

    These techniques are crucial for efficient schema management, especially when working with complex methodologies or managing multiple schemas across policies.

    API-Based Schema Management

    While the Excel-first approach works well for initial development, API operations may be helpful for schema updates, bulk operations, and automated workflows if you're familiar with backend programming. Guardian provides comprehensive schema APIs for create, read, update, and delete operations.

    When to Use Schema APIs

    API operations are essential for:

    • Schema Updates: Modifying existing schemas without rebuilding from Excel

    • Bulk Operations: Managing multiple schemas across different policies

    • Integration: Connecting schema management to external workflows

    • Field Key Updates: Programmatically renaming field keys for better calculation code

    For detailed API operations, see Schema Creation Using APIs.

    Importance of Good Key Names

    Field key names are crucial for calculation code readability and maintenance. Good key names become especially important when schemas are used in complex calculations and policy workflows.

    Field Key Naming Best Practices

    Good Field Keys:

    • biomass_density_stratum_i - Clear parameter identification

    • carbon_stock_baseline_t - Indicates baseline value at time t

    • emission_reduction_annual - Descriptive of calculation purpose

    Poor Field Keys: Keep in mind that Excel imports leave default cell IDs as key names -

    • field0, field1 - No semantic meaning

    • biomassDensity - Lacks context (which stratum? units?)

    • carbonStock - Ambiguous (baseline? project? which period?)

    Impact on Calculation Code:
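    A minimal illustration, assuming a credential document in `doc`; the default key `G14` is a hypothetical Excel cell ID left in place after import:

    // With a default Excel cell ID left as the key - opaque and error-prone:
    const b = doc.credentialSubject.G14;

    // With descriptive keys edited after import - self-documenting:
    const biomassDensity = doc.credentialSubject.biomass_density_stratum_i;
    const baselineCarbonStock = doc.credentialSubject.carbon_stock_baseline_t;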

    Standardized Property Definitions

    Guardian's Property Glossary provides standardized data definitions based on the GBBC dMRV Specification that ensure data consistency and comparability across different methodologies and projects. These standardized properties enable interoperability and universal data mapping.

    Understanding Standardized Properties

    For complete property definitions, see Available Schema Types and Property Glossary.

    Purpose of Standardized Properties:

    • Data Consistency: Ensure uniform interpretation of data across different methodology schemas

    • Cross-Methodology Comparability: Enable comparison of projects using different methodologies

    • Enhanced Searchability: Allow efficient data retrieval across the Guardian ecosystem

    • GBBC Compliance: Align with industry-standard dMRV specifications

    Key Standardized Property Categories

    Organization Properties:

    • AccountableImpactOrganization: Project developers and responsible entities

    • Signatory: Agreement signatories with defined roles (IssuingRegistry, ValidationAndVerificationBody, ProjectOwner, VerificationPlatformProvider)

    • Address: Standardized address format with addressType, city, state, country

    Project Properties:

    • ActivityImpactModule: Core project information including classification (Carbon Avoidance/Reduction/Removal)

    • GeographicLocation: Standardized location data with longitude, latitude, geoJsonOrKml

    • MitigationActivity: Mitigation activity classification and methods

    Credit Properties:

    • CRU (Carbon Reduction Unit): Standardized carbon credit structure with quantity, unit, vintage, status

    • REC (Renewable Energy Certificate): Renewable energy certificate format with recType, validJurisdiction

    • CoreCarbonPrinciples: Core carbon principles compliance including generationType, verificationStandard

    Verification Properties:

    • Validation: Standardized validation structure with validationDate, validatingPartyId, validationMethod

    • VerificationProcessAgreement: Verification agreements with signatories, qualityStandard, mrvRequirements

    • Attestation: Attestation structure with attestor, signature, proofType

    Using Standardized Properties in Schemas

    Example: Geographic Location Implementation:

    Using standardized GeographicLocation structure:

    • longitude (string): Longitude coordinate

    • latitude (string): Latitude coordinate

    • geoJsonOrKml (string): Geographic boundary data
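• geographicLocationFile (file): Additional location documentation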

    Example: Carbon Credit Implementation:

    Using standardized CRU structure:

    • quantity (string): Amount of credits

    • unit (enum): CO₂e or other unit specification

    • vintage (string): Year of emission reduction
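• status (enum): Credit status (Active, Retired, etc.)

• coreCarbonPrinciples (object): Core carbon principles compliance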

    Benefits of Standardized Properties

Cross-Methodology Interoperability: Projects can transition between methodologies while preserving their core data structure.

    Registry Aggregation: Registries can aggregate and compare data from different methodology implementations using consistent property structures.

    Automated Quality Control: Standardized properties include built-in validation rules ensuring data consistency and preventing incomplete submissions.

    Four Types of Required Field Settings

    Guardian provides four distinct Required field settings that control field behavior and visibility. Understanding these types is crucial for proper schema design.

    Required Field Types

    1. None

    • Behavior: Optional field, visible to users

    • Use Case: Optional project information, supplementary data

    • Example: project_website_url, additional_notes

    2. Hidden

• Behavior: Not visible to users; used for system data or auto-calculated fields whose expressions are defined within a custom logic block

    • Use Case: Net VCUs, baseline emission final calculations

    • Example: net_VCUs_to_mint, baseline_emissions_tCO2e, project_crediting_period

    3. Required

    • Behavior: Must be completed by users before submission

    • Use Case: Essential project data, regulatory requirements

    • Example: project_title, project_developer, start_date

    4. Auto Calculate

    • Behavior: Not visible to users, calculated automatically

    • Use Case: LHS parameters of equations, intermediate calculation results

    • Assignment: Must be assigned via expression field or custom logic block

    Auto Calculate Field Details

    Auto Calculate fields are essential for methodology calculations but require special handling:

    Purpose:

    • Store Left-Hand Side (LHS) parameters of methodology equations

    • Hold intermediate calculation results for complex formulas

    • Maintain calculated values for audit trails and verification

    Assignment Methods:

1. Expression Field: Set the calculation formula directly in the schema UI. Note that via the UI you can only access variables within that particular schema; sub-schema and other schema variables won't be available.

2. Custom Logic Block: Assign values through policy calculation blocks; this is the most powerful and comprehensive approach.

Auto Calculate Example - Calculation Assignment:
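A minimal sketch of assigning an Auto Calculate field inside a custom logic block. The field keys and arithmetic are illustrative, and Guardian's customLogicBlock conventions are simplified here:

// Illustrative only: read Required user inputs and assign the
// Auto Calculate (LHS) field on the same document.
const area = document.area_hectares_total;           // user input
const density = document.biomass_density_stratum_i;  // user input

// Assigned automatically - users never see or edit this field:
document.baseline_emissions_tCO2e = area * density;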

    Schema UUIDs and Efficient Development

    Every Guardian schema receives a unique identifier (UUID) when created. Understanding and leveraging schema UUIDs enables efficient development workflows, especially for large-scale policy management.

    Schema UUID Structure

    Guardian schema UUIDs follow this format:
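xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx - for example, 9122bbd0-d96e-40b1-92f6-7bf60b68137c, which policy blocks reference as #9122bbd0-d96e-40b1-92f6-7bf60b68137c. The UUID is displayed on the bottom right of the schema editor.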

    UUID Properties:

    • Unique: Each schema gets a distinct identifier

    • Persistent: UUID remains constant after schema creation

    • Reference: Used in policy blocks to reference specific schemas

    • Immutable: UUID doesn't change when schema content is updated

    UUID Benefits for Development

    1. Bulk Find and Replace Operations

When updating policies with new schema versions, UUIDs enable efficient bulk operations. Open the policy JSON in your favorite editor and do a global find-and-replace instead of manually selecting the schema from dropdowns in multiple places.

    2. Policy Block Configuration

Policy workflow blocks reference schemas by UUID; for example, the schemaId property of a requestVcDocumentBlock points to the schema's UUID (see Chapter 13).

    Best Practices Summary

    API Management: Use APIs for schema updates and bulk operations rather than recreating schemas from Excel.

    Field Key Quality: Invest time in meaningful field key names during initial development - changing them later requires calculation code updates.

    Required Type Planning: Choose appropriate Required types based on field purpose:

• Use Auto Calculate for methodology equation results (only simple ones accessing variables from the same schema)

    • Use Required for essential user inputs

    • Use Hidden for intermediate results or calculation related fields defined in custom logic block

    • Use None for optional information

    Testing Integration: Test schema changes across all policy blocks that reference the schema UUIDs.

    Ready for Next Steps

    This chapter covered the essential advanced techniques: API schema management, proper field naming, Required field types, and UUID management. These concepts are fundamental for efficient methodology implementation and policy management.

The next chapter focuses on the testing and validation checklist that ensures schema implementations meet production requirements and maintain accuracy across complex methodology calculations.

    Chapter 10: Monitoring Report Schema Development

    This chapter teaches you how to build monitoring report schemas that handle time-series data collection and calculation updates. You'll learn the exact field-by-field process used for VM0033's monitoring schema, building on the PDD foundation from Chapter 9.

    By the end of this chapter, you'll know how to create the structure yourself, understanding temporal data management, annual parameter tracking, and calculation update mechanisms.

    Monitoring Schema Purpose and Structure

    Monitoring schemas extend your PDD implementation to handle ongoing project operations over crediting periods. Unlike PDD schemas that capture initial project design, monitoring schemas handle:
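• Annual Data Collection: Time-series parameter measurements across the project lifetime

• Calculation Updates: Dynamic recalculation of emission reductions based on new monitoring data

• Quality Control: Data validation and evidence documentation for verification activities

• Temporal Relationships: Maintaining connections between annual data and cumulative results

Methodology PDFs (including VM0033) usually contain a section on the data and parameters to be monitored. Typically, those fields are submitted as part of the monitoring report.

Subsection of Herbaceous Vegetation Stratum Data for Project in MR schema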

    Artifacts Collection

    Comprehensive collection of test artifacts, calculation implementations, Guardian tools, and reference materials for methodology digitization

    Overview

    This directory contains essential artifacts used throughout the methodology digitization process, including real test data, production implementations, Guardian tools, and validation materials. All artifacts have been tested and validated for accuracy against their respective environmental methodologies.

    Methodology Digitization Handbook

    A comprehensive guide to digitizing environmental methodologies on Guardian platform

    Summary

    The Methodology Digitization Handbook is a comprehensive guide for transforming environmental methodologies from PDF documents into fully functional, automated policies on the Guardian platform. Using VM0033 (Methodology for Tidal Wetland and Seagrass Restoration) as our primary reference example, this handbook provides step-by-step instructions, best practices, and real-world examples for every aspect of the digitization process.

    Chapter 3: Guardian Platform Overview

    Guardian is a production platform specifically engineered for digitizing environmental certification processes and creating verifiable digital assets. This chapter provides the technical foundation for understanding how complex methodologies like VM0033 are transformed into automated, blockchain-verified workflows that maintain scientific rigor while dramatically improving process efficiency.

    Technical Focus Areas:

    • Architecture Design: How Guardian's microservices architecture supports methodology complexity at scale

    • Policy Workflow Engine: The core system that converts methodology requirements into executable digital processes

    Chapter 13: Policy Workflow Architecture and Design Principles

    Understanding Guardian's Policy Workflow Engine and connecting your Part III schemas to automated certification workflows

    Part III gave you production-ready schemas. Chapter 13 transforms those static data structures into living, breathing policy workflows that automate your entire methodology certification process.

    Guardian's Policy Workflow Engine (PWE) operates on a simple but powerful principle: connect modular blocks together to create sophisticated automation. Think of it like building with LEGO blocks, where each block serves a specific purpose but gains meaning through its connections with others.

    Guardian's Building Block Philosophy

    VM0033 Reference Materials

    VM0033-Methodology.md

    Complete parsed and structured version of the VM0033 methodology document:

    • Structured methodology text with proper formatting

    • All equations and calculation formulas

    • Parameter definitions and requirements

    • Tools and modules documentation

    • Methodology-specific guidance

    VM0033-Methodology_meta.json

    Metadata and structural information for the VM0033 methodology:

    • Document structure and sections

    • Equation references and locations

    • Parameter cross-references

    • Validation checkpoints

    vm0033-policy.json

    Complete Guardian policy implementation for VM0033:

    • Production-ready policy configuration

    • All workflow blocks and schema references

    • Role-based access controls and permissions

    • Complete integration with Guardian platform

    Test Data & Validation Artifacts

    VM0033_Allcot_Test_Case_Artifact.xlsx

    Official test case artifact developed with Verra using real Allcot ABC Mangrove project data:

    • Actual project parameters from implemented restoration activities

    • Real-world baseline emissions calculations

    • Documented project emissions calculations

    • Leakage calculations based on actual project conditions

    • Final emission reduction results for validation

    • Comprehensive test scenarios covering different wetland types and restoration activities

    ER_calculations_ABC Senegal.xlsx

    Real-world emission reduction calculations for ABC Senegal project:

    • Practical application of methodology calculations

    • Project-specific parameter values

    • Verification data points

    final-PDD-vc.json

    Complete Guardian Verifiable Credential containing VM0033 test data:

    • Purpose: Production Guardian VC format with complete VM0033 test case data

    • Contents: Baseline emissions, project emissions, leakage calculations, and net ERR

    • Usage: Testing Guardian policy calculations and validating customLogicBlock implementations

    • Size: 3.6MB with comprehensive test data structure

    • Integration: Direct input for Guardian dry-run mode and customLogicBlock testing

    PDD-VC-input.json

    Guardian VC input document for PDD submission testing:

    • Purpose: Template for Project Design Document submissions in Guardian

    • Contents: Complete PDD structure with VM0033 test data

    • Usage: Testing PDD submission workflows and schema validation

    • Integration: Compatible with Guardian requestVcDocumentBlock testing

    Guardian Tools & Implementation Code

    AR-Tool-14.json

    Complete Guardian Tool implementation for AR Tool 14 (CDM biomass calculations):

    • Purpose: Production Guardian Tool for tree and shrub biomass estimation

    • Architecture: Three-block pattern (extractDataBlock → customLogicBlock → extractDataBlock)

    • Calculations: Stratified random sampling, uncertainty management, discount factors

    • Integration: Mini-policy that can be called from parent methodologies like VM0033

    • Testing: Includes production JavaScript for all calculation scenarios

    ar-am-tool-14-v4.1.pdf

    Official CDM AR Tool 14 methodology document (32 pages):

    • Purpose: Original methodology specification for AR Tool 14 implementation

    • Contents: Complete methodology for biomass and carbon stock estimation

    • Usage: Reference document for understanding Guardian Tool implementation

    • Key Topics: Sampling methods, allometric equations, uncertainty assessment

    er-calculations.js

    Production JavaScript implementation of VM0033 emission reduction calculations:

    • Purpose: Real Guardian customLogicBlock code for VM0033 calculations

    • Contents: Baseline emissions, project emissions, leakage, and net ERR functions

    • Usage: Reference implementation for Guardian policy development

    • Testing: Validated against VM0033 test artifacts

    • Integration: Direct copy-paste into Guardian customLogicBlock

    Guardian Schema Templates

    PDD-schema.xlsx

    Project Design Document schema template for VM0033:

    • Purpose: Excel-first approach to Guardian schema development

    • Contents: Complete PDD structure with field types and validation rules

    • Usage: Import directly into Guardian for schema creation

    • Features: Guardian-compatible formatting, field key management

    Monitoring-schema.xlsx

    Monitoring Report schema template for VM0033:

    • Purpose: Time-series monitoring data schema for Guardian

    • Contents: Annual monitoring parameters with temporal data structures

    • Usage: Schema development for monitoring report submissions

    • Features: VVB verification workflow support, time-series calculations

    schema-template-excel.xlsx

    Standard Excel template for creating Guardian schemas:

    • Purpose: Base template for any Guardian schema development

    • Contents: Pre-configured field types and validation rules

    • Usage: Starting point for custom schema creation

    • Features: Guardian-compatible structure, import-ready format

    Development Tools

    excel_artifact_extractor.py

    Python tool for extracting and validating calculation data from Excel artifacts:

    • Purpose: Automated parameter extraction and validation

    • Features: Calculation validation, schema generation support, quality assurance checks
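# Usage examples
python excel_artifact_extractor.py list-workbooks
python excel_artifact_extractor.py extract-tabs VM0033_Allcot_Test_Case_Artifact.xlsx
python excel_artifact_extractor.py extract-tab-content VM0033_Allcot_Test_Case_Artifact.xlsx "8.5NetERR"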

    Artifact Categories & Usage

    📊 For Testing & Validation

    • Use VM0033_Allcot_Test_Case_Artifact.xlsx for official test case validation

    • Use final-PDD-vc.json for Guardian customLogicBlock testing

    • Use ER_calculations_ABC Senegal.xlsx for real-world validation scenarios

    🛠️ For Implementation

    • Use er-calculations.js as reference for customLogicBlock development

    • Use AR-Tool-14.json as template for Guardian Tools development

    • Use vm0033-policy.json for complete Guardian policy reference

    📋 For Schema Development

    • Start with schema-template-excel.xlsx for any new schema

    • Use PDD-schema.xlsx and Monitoring-schema.xlsx for VM0033-specific schemas

    • Follow Excel-first approach documented in Part III

    🔍 For Documentation & Reference

    • Reference VM0033-Methodology.md for methodology understanding

    • Use ar-am-tool-14-v4.1.pdf for AR Tool implementation guidance

    • Check VM0033-Methodology_meta.json for structural metadata

    Guardian Testing Integration

    CustomLogicBlock Testing
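Artifacts used for customLogicBlock testing:

{
  "test_input_file": "final-PDD-vc.json",
  "validation_reference": "VM0033_Allcot_Test_Case_Artifact.xlsx",
  "implementation_code": "er-calculations.js"
}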

    Dry-Run Mode Testing
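Artifacts used for dry-run mode testing:

{
  "policy_file": "vm0033-policy.json",
  "test_documents": ["PDD-VC-input.json", "final-PDD-vc.json"],
  "validation_artifacts": ["VM0033_Allcot_Test_Case_Artifact.xlsx"]
}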

    Guardian Tools Testing
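Artifacts used for Guardian Tools testing:

{
  "tool_implementation": "AR-Tool-14.json",
  "methodology_reference": "ar-am-tool-14-v4.1.pdf",
  "integration_example": "vm0033-policy.json"
}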

    Artifact Validation Process

    Step 1: Calculation Verification

    1. Open relevant test artifact (e.g., VM0033_Allcot_Test_Case_Artifact.xlsx)

    2. Review all input parameters and their sources

    3. Verify calculation formulas match methodology requirements

    4. Validate intermediate calculation steps

    5. Confirm final emission reduction results

    Step 2: Guardian Implementation Testing

    1. Use schema templates to structure Guardian schemas

    2. Import test data from final-PDD-vc.json into Guardian policy

    3. Run Guardian calculations using er-calculations.js reference

    4. Compare Guardian outputs with Excel artifact results

    5. Verify calculations match within acceptable tolerance

    Step 3: Production Validation

    1. Use vm0033-policy.json for complete policy testing

    2. Test with real project data from ER_calculations_ABC Senegal.xlsx

    3. Validate Guardian policy produces expected emission reductions

    4. Confirm integration with Guardian Tools like AR Tool 14

    Quality Assurance Standards

    Validation Criteria

✅ Calculation Accuracy: All calculations must match methodology requirements exactly

✅ Guardian Compatibility: All artifacts tested with Guardian platform

✅ Production Ready: Code and configurations validated in production environment

✅ Documentation Complete: All artifacts include usage instructions and validation results

    File Integrity

    • All JSON files validated for proper formatting

    • All Excel files tested for calculation accuracy

    • All JavaScript code tested in Guardian environment

    • All PDF documents verified for completeness

    Integration with Handbook Parts

    Part III (Schema Design)

    • Use schema templates for consistent schema development

    • Reference PDD and Monitoring schema examples

    • Follow Excel-first approach patterns

    Part IV (Policy Workflow)

    • Reference vm0033-policy.json for production workflow patterns

    • Use AR Tool integration examples for Guardian Tools

    Part V (Calculation Logic)

    • Use er-calculations.js for customLogicBlock implementation

    • Reference AR-Tool-14.json for Guardian Tools development

    • Test with final-PDD-vc.json for validation

    Common Usage Patterns

    For New Methodology Implementation

    1. Start with schema-template-excel.xlsx for schema design

    2. Reference VM0033-Methodology.md for methodology understanding

    3. Use er-calculations.js patterns for calculation implementation

    4. Validate against test artifacts like VM0033_Allcot_Test_Case_Artifact.xlsx

    For Guardian Tools Development

    1. Study AR-Tool-14.json for three-block pattern implementation

    2. Reference ar-am-tool-14-v4.1.pdf for methodology understanding

    3. Follow extractDataBlock → customLogicBlock → extractDataBlock pattern

    4. Test integration with parent policies

    For Testing & Validation

    1. Use Guardian's dry-run mode with policy artifacts

    2. Test customLogicBlocks with final-PDD-vc.json input

    3. Validate results against Excel test artifacts

    4. Compare with real-world project data


    Complete Artifact Collection: This collection provides everything needed for Guardian methodology digitization, from initial schema design through production deployment and testing.

    Regular Updates: Artifacts are continuously updated based on Guardian platform evolution and methodology refinements. Always use the latest versions for development.

    Production Validation Required: While all artifacts are tested, always validate in your specific Guardian environment before production deployment.

    The Block-Event Architecture

    Guardian policies work through workflow blocks that communicate via events. When a user submits a document, completes a calculation, or makes an approval decision, these actions trigger events that flow to other blocks, creating automated workflows.
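User Action → Block Processing → Event Trigger → Next Block → Workflow Progression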

VM0033 demonstrates this perfectly. For instance, when a Project Developer submits a PDD using a requestVcDocumentBlock, it triggers events that:

    • Refresh the document grid for Standard Registry review

    • Update project status to "Waiting to be Added" (Listing process)

    • Enable VVB assignment workflow once registry accepts the listing

    Workflow Block Categories

    Guardian provides four main block categories:

    Data Input and Management: Collect and store information

    • requestVcDocumentBlock: Generate forms from your Part III schemas

    • sendToGuardianBlock: Save documents to database or Hedera blockchain

    • interfaceDocumentsSourceBlock: Display document grids with filtering capabilities

    Logic and Calculation: Process and validate data

    • customLogicBlock: Execute JavaScript or Python calculations for emission reductions

    • documentValidatorBlock: Validate data against your methodology rules

    • switchBlock: Create conditional workflow branches

Token and Asset Management: Handle the credit issuance and retirement lifecycle

• mintDocumentBlock: Issue VCUs (tokens) based on verified emission reductions or removals

    • tokenActionBlock: Transfer, retire, or manage existing tokens

    • retirementDocumentBlock: Permanently remove tokens from circulation

    Container and Navigation: Organize user experience

    • interfaceContainerBlock: Create tabs, steps, and layouts

    • policyRolesBlock: Manage user role assignment

    • buttonBlock: Add custom actions and state transitions

    From Part III Schemas to Policy Workflows

    Your schemas become the foundation for workflow automation. Here's how they connect:

    Schema UUID Integration

    Each schema from Part III has a unique UUID that becomes a reference in policy blocks:
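{
  "blockType": "requestVcDocumentBlock",
  "schemaId": "#9122bbd0-d96e-40b1-92f6-7bf60b68137c",
  "uiMetaData": {
    "title": "Project Design Document",
    "description": "Submit your PDD for validation"
  }
}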

    That schema UUID (#9122bbd0-d96e-40b1-92f6-7bf60b68137c) is your PDD schema from Part III. Guardian automatically generates a form with all your schema fields, validation rules, and input types.

    Field Key Mapping

    Schema field keys become variables in calculation blocks:
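// Your Part III schema field: "baseline_emissions_tCO2e"
// Becomes a JavaScript variable in customLogicBlock:
const baselineEmissions = document.baseline_emissions_tCO2e;
const projectEmissions = document.project_emissions_tCO2e;
const netReductions = baselineEmissions - projectEmissions;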

    Validation Rule Translation

    Schema validation rules automatically enforce data quality:

    • Required fields become mandatory form inputs

    • Number ranges become input validation

    • Enum values become dropdown selections

    • Pattern matching ensures data format consistency

    Role-Based Workflow Design

    Environmental methodologies require clear stakeholder separation. Guardian implements this through role-based access control:

    Standard Stakeholder Roles

    OWNER (Standard Registry)

    • Manages the overall certification program and policy

    • Approves VVBs and validates projects

• Authorizes token minting (issuance)

• Reviews all documentation received from the developer or VVB and requests clarifications

    • Maintains audit trails and program integrity

    Project_Proponent (Project Developer)

    • Submits PDDs and monitoring reports

    • Assigns VVBs for validation/verification

• Receives carbon credits (minted tokens) upon successful verification

    • Tracks project status and documentation

    VVB (Validation and Verification Body)

    • Registers as independent auditor

    • Validates project submissions

    • Verifies monitoring reports

    • Submits validation/verification reports

    Document Access Patterns

    Each role sees different views of the same data:
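{
  "permissions": ["Project_Proponent"],
  "onlyOwnDocuments": true
}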

    Project Developers only see their own projects, while Standard Registry sees all projects for oversight. VVBs see projects assigned to them for validation/verification.

    Event-Driven Workflow Patterns

    Traditional workflows are linear: Step 1 → Step 2 → Step 3. Guardian workflows are event-driven, allowing flexible, responsive automation.

    Event Types and Flow

RunEvent: Triggered when a block completes

RefreshEvent: Updates UI displays and document grids

TimerEvent: Time-based triggers for deadlines or schedules

ErrorEvent: Handles validation failures and error recovery

    VM0033 shows sophisticated event patterns:
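{
  "source": "add_project",
  "target": "save_added",
  "input": "RunEvent",
  "output": "Button_0"
}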

    When a Project Developer clicks "Add Project" (Button_0 output), it triggers the save_added block, which stores the project and refreshes the interface.

    Multi-Path Workflows

    Events enable conditional branching. A VVB's validation decision creates different event paths:
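Validation Decision → Approved Path: Project Listing + Monitoring Setup
                  → Rejected Path: Developer Notification + Revision Request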

    This flexibility mirrors real-world certification processes where outcomes depend on validation results, not predetermined sequences.

    VM0033 Architecture Patterns

    VM0033's production policy demonstrates proven architecture patterns worth understanding:

    Three-Tier Stakeholder Design

    Tier 1: Registration and Setup

    • VVB registration and approval

    • Project listing and initial review

    • Role assignment and permissions setup

    Tier 2: Validation and Verification

    • Project validation workflows

    • Monitoring report submission and verification

    • Document review and approval processes

    Tier 3: Token Management and Audit

    • Emission reduction calculation and validation

    • VCU token minting based on verified results

    • Trust chain generation and audit trail creation

    Document State Management

    VM0033 tracks document states throughout the certification lifecycle:
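Draft → Submitted → Under Review → Approved/Rejected → Published → Minted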

Each state transition triggers appropriate events, updates status values, sends notifications, and applies access control changes.

    Practical Implementation Strategy

    Reuse Rather Than Rebuild: Instead of creating policies from scratch, import existing policies like VM0033, remove their schemas, add your Part III schemas, and modify the workflow logic. This approach saves weeks of development time and provides proven workflow patterns as your foundation.

    To reuse VM0033: Import the policy → Delete existing schemas → Import your Part III schemas → Update schema IDs at relevant places with bulk find and replace → Modify token minting rules → Test with your data.

    Start with Document Flow

    Begin by mapping your methodology's document flow:

    1. What documents need submission? (PDD, monitoring reports)

    2. Who reviews each document? (Registry, VVBs)

    3. What approvals are required? (Validation, verification)

    4. When are tokens minted? (After verification approval)

    Schema Integration Planning

    Map your Part III schemas to workflow purposes:

    • PDD Schema: Project submission and validation workflow

    • Monitoring Schema: Ongoing reporting and verification workflow

    • Validation Report Schema: VVB validation documentation

    • Verification Report Schema: VVB verification documentation

    Calculation Integration Strategy

    Your Part III schemas contain calculation fields that become variables in customLogicBlock:
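// Emission reduction calculation using schema field keys
const calculateEmissionReductions = (pddData, monitoringData) => {
  const baseline = pddData.baseline_emissions_total;
  const project = monitoringData.project_emissions_measured;
  const leakage = monitoringData.leakage_emissions_calculated;

  return baseline - project - leakage;
};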

The calculation results feed directly into schemas to be reviewed by the VVB/Registry and are later accessed via mintDocumentBlock for VCU issuance.

    Development Workflow

    Phase 1: Architecture Planning

    • Map stakeholder roles and permissions

    • Design document flow and state transitions

    • Plan event connections between workflow blocks

    • Identify calculation requirements and token minting rules

    Phase 2: Block Configuration

    • Configure data input blocks with Part III schemas

    • Set up calculation blocks with methodology formulas

    • Create container blocks for user interface organization

    • Connect blocks through event definitions

    Phase 3: Testing and Refinement

    • Test complete workflows with sample data

    • Validate calculations against Part III test artifacts

    • Refine user interfaces and error handling

    • Optimize performance and user experience

    Key Takeaways

    Guardian's Policy Workflow Engine transforms static schemas into dynamic certification workflows. The event-driven architecture provides flexibility while maintaining audit trails and stakeholder separation.

    VM0033 offers a proven template for environmental methodology automation. Rather than building from scratch, leverage existing patterns and focus your effort on methodology-specific calculations and business rules.

    Part III schemas integrate seamlessly with policy workflows. Schema UUIDs become block references, field keys become calculation variables, and validation rules become workflow automation.


    Next Steps: Chapter 14 explores Guardian's 25+ workflow blocks in detail, showing step-by-step configuration for data collection, calculations, and token management using VM0033's production examples.

    Prerequisites Check: Ensure you have:

    Time Investment: ~25 minutes reading + ~60 minutes hands-on practice with Guardian policy architecture and planning


    Building the Primary Monitoring Schema

    Step 1: Create Main Monitoring Schema Header

    Start your monitoring Excel file with the main schema structure:

    This establishes the monitoring schema as a Verifiable Credentials type that will create on-chain records for each monitoring submission.

    Step 2: Add Temporal Framework Fields

    The first fields should establish the temporal context for monitoring data:

    These fields establish when the monitoring data was collected and create unique identifiers for each monitoring period.

    Step 3: Add Monitoring Period Input Structure

    Create the main monitoring data collection framework:

    This references a sub-schema containing the detailed monitoring parameter collection fields.

    Step 4: Create Monitoring Period Inputs Sub-Schema

    Create a new worksheet "(New) Monitoring Period Inputs" with the monitoring parameter structure:

    Monitoring Period Inputs Sheet

    Implementing Stratum-Level Data Collection

    Creating Stratum Monitoring Sub-Schemas

    For methodologies with multiple strata like VM0033, create stratum-specific monitoring:

    Create "(New) MP Herbaceous Vegetat 1" worksheet(names are trimmed to accomodate excel's limitations):

    Adding Change Detection Fields

    Monitor changes from baseline or previous periods:

    Annual Parameter Tracking Implementation

    Step 1: Create Annual Input Parameters Structure

    Add annual parameter collection capability:

    Step 2: Build Annual Inputs Sub-Schema

    Create "(New) Annual Inputs Parameters" worksheet:

    Step 3: Add Project Scenario Annual Parameters

    Create corresponding project emissions tracking:

    Create "(New) Annual Inputs Paramet 1" worksheet with project-specific parameters:

    Implementing Quality Control and Evidence Collection

    Adding Data Quality Indicators

    Include quality control fields in your monitoring schemas:

    Create "Data quality level (enum)" tab:

    Evidence Documentation Structure

    Add fields for verification evidence:

    Calculation Update Mechanisms

    Adding Calculation Fields

    Include fields that trigger calculation updates:

    Linking to PDD Parameters

    Ensure monitoring parameters connect to PDD estimates:

    Temporal Boundary Management

    Crediting Period Tracking

    Add fields to manage crediting periods:

    Create "Crediting period (enum)" tab:

    Historical Data References

    Enable access to previous monitoring data:

    VVB Verification Support Fields

    Adding Verification Workflow Fields

    Include fields supporting VVB verification activities:

    Create "Verification status (enum)" tab:

    Audit Trail Fields

    Maintain audit trail for verification:

    Advanced Monitoring Features

    Conditional Monitoring Based on PDD Selections

    Link monitoring requirements to PDD method selections:

    Multi-Year Averaging Fields

    For parameters requiring multi-year tracking:

    Uncertainty Quantification

    Add uncertainty tracking as required by methodology:

    Performance Optimization for Long-term Monitoring

    Efficient Data Structure Design

    Use sub-schemas to group related annual data:

    Create efficient annual data structure in "(New) Project Emissions Annual":

    Archive and Retrieval Planning

    Include fields supporting long-term data management:

    Testing Your Monitoring Schema

    Guardian provides built-in validation when importing Excel schemas and testing schema functionality through the UI.

    Validation Checklist for Monitoring Schemas

    Before deploying, verify:

    Important: Field Key Management for Monitoring Schemas

    Just like PDD schemas, Guardian generates default field keys when importing monitoring Excel schemas. This is especially important for monitoring schemas since they often have time-series calculations.

    After Import - Review and Rename Field Keys:

    1. Navigate to schema management in Guardian

    2. Open your imported monitoring schema for editing

    3. Review each field's "Field Key" property

    4. Rename keys for calculation-friendly monitoring code:

      • monitoring_year_t instead of G5

      • carbon_stock_current_period instead of carbonStockCurrentPeriod

      • emission_reduction_annual instead of emissionReductionAnnual

      • biomass_change_since_baseline instead of biomassChangeSinceBaseline

    Why This Matters for Monitoring: Time-series calculations rely heavily on clear field naming:
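For example, a sketch (field keys and data shapes assumed for illustration) of a cumulative time-series calculation that stays readable with good keys:

// Illustrative only: monitoringPeriods is assumed to be an array of
// monitoring-report documents with clearly named field keys.
const cumulativeReductions = monitoringPeriods.reduce(
  (total, period) => total + period.emission_reduction_annual,
  0
);

// Comparing current carbon stock against the baseline is unambiguous:
const biomassChangeSinceBaseline =
  currentPeriod.carbon_stock_current_period - baseline.carbon_stock_baseline_t;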

    Integration Testing with PDD Schema

    1. Test parameter name consistency between PDD and monitoring field keys

    2. Validate calculation updates when monitoring data changes

    3. Verify temporal relationship tracking works correctly

    4. Test VVB verification workflow with monitoring submissions

    5. Validate cumulative calculation accuracy over multiple periods

    Trigger Automatic Calculations

    • Monitoring data submission triggers emission reduction calculations

    • Updated results flow to token minting calculations

    • Quality control validation occurs before calculation updates

    Support Verification Processes

    • VVB receives monitoring data with evidence documentation

    • Verification decisions update project status and calculation eligibility

    • Approved monitoring data enables token issuance for the monitoring period

    Best Practices for Monitoring Schema Development

    Parameter Consistency: Ensure monitoring parameter names and units exactly match PDD schema definitions to enable proper calculation updates.

    Quality Control Integration: Include quality indicators and evidence fields for every critical measurement to support verification workflows.

Performance Planning: Design efficient sub-schema structures that maintain performance as historical monitoring data accumulates over project lifetimes.

    Temporal Logic: Plan temporal relationships carefully to support both period-specific and cumulative calculations across crediting periods.

    Evidence Management: Include appropriate file upload and documentation fields to support verification requirements and audit trail maintenance.

    VVB Workflow Design: Design verification support fields that enable efficient VVB review and approval processes without overwhelming interfaces.

    VM0033 Monitoring schema
    Target Audiences

    Verra and Other Standards Organizations

    • Maintain and update existing digitized methodologies

    • Ensure compliance with evolving regulatory requirements

    • Optimize methodology performance and user experience

    Methodology Developers and Carbon Market Professionals

    • New to Guardian ecosystem seeking to digitize methodologies

    • Environmental consultants expanding into digital MRV

    • Carbon project developers wanting to understand the digitization process

    Technical Implementers

    • Developers working on Guardian-based solutions

    • System integrators connecting Guardian with external systems

    • QA teams testing methodology implementations

    Regulatory and Compliance Teams

    Key Features and Benefits

    Complete Process Coverage: From initial PDF analysis to production deployment with VM0033 digitization example throughout.

Comprehensive Coverage

• Complete process from PDF analysis to deployment

• Real examples from VM0033 implementation

• Practical focus with actionable steps

• Best practices from successful digitizations

Why VM0033?

• 135-page methodology that covers most challenges

• Active use in blue carbon projects

• Guardian policy being used by Verra in production

• Built in collaboration with Verra & Allcot with real project data and testing

Streamlined Structure

• 27 focused chapters across 8 parts

• 20-30 hours total reading time

• Practical, hands-on approach throughout

• Reduced complexity while maintaining comprehensive coverage

    Handbook Structure and Flow

    Total Time Investment: 20-30 hours for complete reading

    Part I: Foundation (Chapters 1-3) - 20-30 minutes

Purpose: Establish understanding of methodology digitization and Guardian platform

Outcome: Clear comprehension of the digitization process and platform capabilities

    • Chapter 1: Introduction to Methodology Digitization

    • Chapter 2: Understanding VM0033 Methodology

    • Chapter 3: Guardian Platform Overview for Methodology Developers

    Part II: Analysis and Planning (Chapters 4-7) - 30-40 minutes

Purpose: Systematic analysis of methodology documents and preparation for digitization

Outcome: Complete understanding of methodology requirements and test artifacts

    • Chapter 4: Methodology Analysis and Decomposition

    • Chapter 5: Equation Mapping and Parameter Identification

    • Chapter 6: Tools and Modules Integration

    • Chapter 7: Test Artifact Development

    Part III: Schema Design and Development (Chapters 8-12) - 3-4 hours

Purpose: Practical schema development and Guardian management features

Outcome: Production-ready PDD and monitoring schemas with testing validation

    • Chapter 8: Schema Architecture and Foundations

    • Chapter 9: Project Design Document (PDD) Schema Development

    • Chapter 10: Monitoring Report Schema Development

    • Chapter 11: Advanced Schema Techniques (API management, Required types, UUIDs)

    • Chapter 12: Schema Testing and Validation Checklist

    Part IV: Policy Workflow Design and Implementation (Chapters 13-17) - 3-4 hours

Purpose: Transform Part III schemas into complete Guardian policies with automated workflows

Outcome: Production-ready policies with stakeholder workflows and token minting

    • Chapter 13: Policy Workflow Architecture and Design Principles

    • Chapter 14: Guardian Workflow Blocks and Configuration

    • Chapter 15: VM0033 Policy Implementation Deep Dive

    • Chapter 16: Advanced Policy Patterns and Testing

    • Chapter 17: Policy Deployment and Production Management

    Part V: Calculation Logic Implementation (Chapters 18-21) - 2-3 hours

Purpose: Convert methodology equations into executable code and implement comprehensive testing

Outcome: Production-ready calculation implementations with Guardian's testing framework

    • Chapter 18: Custom Logic Block Development

    • Chapter 19: Formula Linked Definitions (FLDs)

    • Chapter 20: Guardian Tools Architecture and Implementation

    • Chapter 21: Calculation Testing and Validation

    Part VI: Integration and Testing (Chapters 22-23) - 1-2 hours

Purpose: End-to-end testing and API automation for production deployment

Outcome: Production-ready methodology with testing coverage and API integration

    • Chapter 22: End-to-End Policy Testing - Multi-role testing, workflow validation, Guardian dry-run capabilities

    • Chapter 23: API Integration and Automation - Guardian APIs, automated workflows, virtual user management

    Part VII: Deployment and Maintenance (Chapters 24-26) - 5-8 hours

Purpose: Deploy, monitor, and maintain methodology implementations

Outcome: Operational methodology with ongoing support procedures

    • Chapter 24: User Management and Role Assignment - User roles, permissions, organization management

    • Chapter 25: Monitoring and Analytics - Guardian Indexer - Analytics, compliance reporting, audit trails

    • Chapter 26: Maintenance and Updates - Version management, bug fixes, regulatory changes

    Part VIII: Advanced Topics (Chapters 27-28) - 5-7 hours

Purpose: Advanced integration techniques and troubleshooting

Outcome: Expert-level understanding and problem-solving capabilities

    • Chapter 27: Integration with External Systems - Registry integration, monitoring systems, enterprise connectivity

    • Chapter 28: Troubleshooting and Common Issues - Debugging techniques, issue resolution, performance optimization

    Success Metrics

    For Standards Organizations

    • Reduced Maintenance Effort: 50-70% reduction in methodology update time

    • Improved Compliance: Automated audit trails and validation

    • Enhanced User Experience: Streamlined certification processes

    • Better Data Quality: Automated validation and error prevention

    For Methodology Developers

    • Faster Time-to-Market: 60-80% reduction in digitization time

    • Higher Quality: Comprehensive testing and validation procedures

    • Reduced Risk: Proven patterns and best practices from VM0033 implementation

    • Ongoing Support: Maintenance and update procedures

    For Technical Teams

    • Standardized Approach: Consistent methodology implementations

    • Reusable Components: Shared libraries and patterns

    • Quality Assurance: Comprehensive testing frameworks

    • Performance Optimization: Scalable, efficient implementations

    Prerequisites and Requirements

    Required Knowledge

    • Environmental Methodology Understanding: Familiarity with carbon markets and MRV concepts

    • JSON and Basic Programming: Ability to read and modify JSON configurations

    • Web Technologies: Basic understanding of web applications and APIs

    Optional but Helpful

    • JavaScript Experience: For advanced calculation logic implementation

    • Carbon Market Experience: For understanding business context and requirements

    Essential Setup

    • Guardian Platform Access: MGS or local open source setup for hands-on practice

    • VM0033 Methodology Document: Reference material for examples

    Getting Started

    Quick Navigation

    • 📋 Table of Contents - Complete handbook overview with reading time estimates

    • 📝 Chapter Outlines - Detailed descriptions of all chapters and topics

    • 🏗️ Part I: Foundation and Preparation - Start your learning journey here (Available Now)

    • 🔍 Part II: Analysis and Planning - Systematic methodology analysis techniques (Available Now)

• 🏗️ Part III: Schema Design and Development - Schema development and testing (Available Now)

• ⚙️ Part IV: Policy Workflow Design and Implementation - Complete policy workflow development (Available Now)

• 🧮 Part V: Calculation Logic Implementation - CustomLogicBlock development, Guardian Tools, and testing (Available Now)

• 🔗 Part VI: Integration and Testing - End-to-end testing, API integration, and production deployment validation (Available Now)

• 🚀 Part VII: Deployment and Maintenance - User management, monitoring, and maintenance procedures (In Progress)

• ⚡ Part VIII: Advanced Topics - External integration and troubleshooting (In Progress)

    Available Content

    Parts I-VI are now available with all twenty-three chapters complete and ready for use, covering the complete foundation through production deployment and API integration.

Part | Status | Chapters | Description

Part I | ✅ Available | Chapters 1-3 | Foundation concepts, VM0033 overview, Guardian platform introduction

Part II | ✅ Available | Chapters 4-7 | Methodology analysis, equation mapping, tools integration, test artifacts

Part III | ✅ Available | Chapters 8-12 | Schema architecture, PDD and monitoring schema development, advanced techniques, testing

    Shared Resources

    • 🔧 Shared Resources - Templates, integration guides, and reference materials

    • 📄 Templates - Standardized chapter and section templates

    • 🔗 VM0033 Integration - VM0033-specific integration system


    This handbook represents the collective knowledge and experience of the Guardian community, with special thanks to the Verra and Allcot team for their collaboration on the VM0033 implementation that serves as our primary example throughout this guide.

• Integration Capabilities: Technical mechanisms for embedding methodology logic within broader certification workflows

• Implementation Framework: Systematic approach to transforming methodology documents into functional digital systems

Guardian Architecture for Methodologies

    Guardian's microservices architecture provides the technical foundation needed to handle the computational and organizational complexity of advanced environmental methodologies like VM0033 at production scale.

    Core Technical Components:

    Service Architecture:

    • guardian-service: Central orchestration service managing policy execution and business logic

    • policy-service: Workflow execution engine that processes methodology-specific rules and requirements

    • worker-service: Dedicated calculation processing service handling intensive computational tasks

    • api-gateway: External integration hub providing secure interfaces for data exchange and validation

    • frontend: Multi-role user interface system supporting complex stakeholder interactions

    Architecture Benefits for Complex Methodologies:

    • Computational Scalability: Distributed processing handles simultaneous calculation across multiple carbon pools, thousands of monitoring points, and multi-decade time series

    • Stakeholder Complexity: Service separation enables tailored interfaces and access control for diverse stakeholder types (project developers, validators, registries, technical experts)

• Reliability at Scale: Microservices isolation ensures that processing-intensive calculations don't impact user interface responsiveness or data integrity

    • Integration Flexibility: Modular design supports integration with external validation systems, monitoring equipment, and third-party calculation tools

    Integration Capabilities:

    • Hedera Hashgraph: Immutable record-keeping

    • IPFS: Decentralized document storage

    • External APIs: Data validation and verification

    • Result: VM0033's extensive documentation, monitoring data, and verification records stored in tamper-proof, auditable formats

    See Guardian architecture for detailed technical specifications.

    Policy Workflow Engine Fundamentals

    The Policy Workflow Engine (PWE) is Guardian's core innovation, transforming certification processes into dynamic, executable workflows for environmental asset creation and verification.

    Complexity Consideration: VM0033 methodology contains intricate decision trees and calculation procedures requiring careful mapping to Guardian's workflow blocks for complete compliance.

    Core PWE Concept: Environmental certification processes are sophisticated workflows where methodology-specific requirements (like VM0033's carbon accounting) are embedded within broader certification procedures involving multiple stakeholders, decision points, data collection, calculations, and verification steps.

    PWE Components for VM0033:

    Workflow Block Types:

    • Container Blocks: Organize processes into logical groupings

    • Step Blocks: Guide users through sequential procedures

    • Calculation Blocks: Handle mathematical operations for carbon accounting

    • Request Blocks: Manage extensive data collection requirements

    VM0033 Certification Process Implementation:

    • Embedded Decision Logic: VM0033's baseline determination integrated into broader project approval workflows

    • Automated Compliance: VM0033's monitoring requirements embedded within ongoing certification processes

    • Integrated Calculations: VM0033's carbon accounting procedures automated within broader verification workflows

    • Complete Process Management: From project registration through credit issuance with embedded VM0033 compliance

    Automated Compliance Features:

    • Requirement Enforcement: Users cannot proceed without necessary data/validation

    • Consistency: Ensures uniform implementation across different projects

• Error Reduction: Automated validation reduces data input mistakes

    • Format Validation: Specific data formats and approval workflows matching VM0033 requirements

    Stakeholder Coordination:

    • Role-based Workflows: Different interfaces and permissions for each stakeholder type

    • Governance Compliance: Meets VM0033's complex governance requirements

    • Tailored Tools: Each stakeholder gets specific tools and information needed

    See workflow blocks for complete component reference.

    Schema System and Data Management

    Guardian's schema system provides the foundation for structured data management, defining data structures, validation rules, and relationships that ensure methodology compliance and enable automated processing.

    Schema Architecture:

    System vs. Custom Schemas:

    • System Schemas: Core platform functionality

• Custom Schemas: Methodology-specific data (PDD, MR, and the project and baseline emissions structures within them, etc.)

    VM0033 Data Requirements:

    • Project boundaries and baseline conditions

    • Monitoring results and stakeholder information

    • Calculation parameters with specific validation requirements

    • Complex relationships between data elements

    Key Capabilities:

    Verifiable Credentials Integration:

    • Purpose: Extensive documentation and verification requirements

    • Features: Cryptographically signed, tamper-proof, independently verifiable

    • Applications: Project activities, monitoring results, stakeholder qualifications

    Time-Series Data Support:

    • Monitoring Requirements: Regular carbon stock, project activity, environmental condition monitoring

    • Time Spans: Decades-long project lifetimes

    • Validation: Built-in rules ensuring consistent formats and methodology compliance

    • Storage: Optimized for long-term analysis and verification

    Calculation Support:

    • Complex Equations: Carbon stock changes, emission factors, uncertainty calculations

    • Input Management: Ensures necessary data available in correct formats

    • Result Storage: Structured formats supporting audit and verification

    • Automated Processing: Enables sophisticated mathematical operations

    Schema Versioning:

    • Long-term Projects: Decades-long implementations with evolving methodology requirements

    • Historical Data: Remains accessible and valid across versions

    • Migration Support: Smooth transitions to updated methodology versions

    • Compliance: Maintains validity across methodology evolution

    See schema system and available schema types for detailed specifications.

    Blockchain Integration and User Management

    Guardian's integration with Hedera Hashgraph provides immutable record-keeping essential for environmental asset verification and trading, ensuring all methodology implementation activities are recorded in tamper-proof, publicly auditable formats.

    Production Validation: VM0033's Guardian implementation successfully deployed in production, demonstrating platform capability to handle complex, real-world methodology requirements at scale.

    Blockchain Integration Components:

    Hedera Network Integration:

    • Automatic Handling: VM0033's extensive audit trail requirements

    • Recorded Elements: Document submissions, calculation results, verification decisions, token transactions

    • Cryptographic Proof: Authenticity and timing for comprehensive audit trails

    • Market Confidence: Environmental asset markets require this level of transparency

    User Management System:

    • Stakeholder Support: Complex ecosystems typical of environmental methodologies

    • VM0033 Roles: Project developers, technical reviewers, independent validators, registry operators

    • Access Control: Roles and permissions system ensures appropriate access/permissions

    Role-Based Access Control:

    • Project Developers: Submit documentation/monitoring data, cannot approve own submissions

    • Validators: Review/approve project activities, cannot modify project data

    • Registry Operators: Oversee entire process, cannot interfere with independent validation

    • Enforcement: Automatic separation of duties

    IPFS Integration:

    • Storage: Project design documents, monitoring protocols, verification reports, supporting evidence

    • Verification: Cryptographic hashes recorded on Hedera

    • Long-term Access: Documentation remains accessible/verifiable over environmental project lifetimes

    User Interface Components:

    • Workflow Support: Project registration, document submission, data entry, calculation review, verification

    • Design: Consistent, intuitive interfaces with methodology-specific flexibility

    • Coordination: Notification systems for stakeholder coordination across extended time periods with audit trails

    See Hedera integration for detailed technical specifications.

    Mapping VM0033 Complexity to Guardian Capabilities

    VM0033 methodology demonstrates how Guardian's flexible architecture accommodates sophisticated environmental methodologies through systematic capability mapping.

    Certification Process → Guardian Implementation:

    Project Eligibility (VM0033 Embedded):

    • Certification Process: Complete project registration workflow with embedded VM0033 applicability requirements

    • Guardian: Conditional logic blocks + validation schemas + request blocks ensuring both general certification and VM0033-specific eligibility

    • Result: Automated certification process where only projects meeting both general standards and VM0033 requirements can proceed

    Baseline Assessment (VM0033 Embedded):

    • Certification Process: Baseline determination as part of broader project validation workflow with embedded VM0033 decision trees

    • Guardian: Switch blocks + calculation containers within broader validation workflows

    • Result: Automated certification process where VM0033 baseline requirements are seamlessly integrated into project approval workflows

    Monitoring Requirements:

    • VM0033: Regular measurement of carbon stocks, project activities, environmental conditions with variable frequencies

    • Guardian: Time-based workflow blocks + data collection schemas + timer blocks + conditional logic

    • Result: Automated monitoring enforcement according to methodology specifications

    Calculation Procedures:

    • VM0033: Complex equations for carbon stock changes, emission factors, uncertainty analysis across multiple pools

    • Guardian: Calculation blocks + mathematical add-ons with full audit trails

    • Result: Sophisticated mathematical operations with complete input/result tracking

    Verification Requirements:

    • VM0033: Independent review of documentation, monitoring data, calculation results with stakeholder independence

    • Guardian: Multi-signature blocks + role-based approval workflows

    • Result: Automated verification compliance with appropriate stakeholder involvement

    Credit Issuance (VM0033 Embedded):

    • Certification Process: Complete credit issuance workflow incorporating VM0033's calculation procedures, buffer requirements, and metadata within broader registry standards

    • Guardian: Token blocks + minting procedures ensuring compliance with both registry standards and VM0033 methodology requirements

    • Result: Automated credit issuance where VM0033 compliance is embedded within complete certification process

    Platform Capability Demonstration: Guardian's flexible architecture, comprehensive workflow blocks, and robust data management transform complete certification processes - with embedded methodology requirements like VM0033 - into automated, verifiable, auditable digital workflows that maintain full compliance while enabling efficient processing from project registration through credit issuance.


    Related Resources

    • Guardian Architecture - Detailed technical architecture

    • Policy Workflow Blocks - Available workflow components

    • Schema System - Data structure management

    • Roles & Permissions - Stakeholder management

• Artifacts Collection - Working examples, test cases, and validation tools

• Excel Artifact Extractor - Python tool for data extraction and validation

    Key Capabilities Covered

    • Guardian's microservices architecture for methodology complexity

    • Policy Workflow Engine for automated compliance

    • Schema system for structured data management

    • Blockchain integration for immutable records

    • VM0033 complexity mapping to Guardian features

    Part I Complete: You now have the complete foundation needed for methodology digitization - conceptual understanding, domain knowledge, and technical platform capabilities. You're ready to begin systematic methodology analysis in Part II.

    Chapter 9: Project Design Document (PDD) Schema Development

    This chapter teaches you how to build Guardian schemas step-by-step for PDD implementation. You'll learn the exact field-by-field process used for VM0033, translating methodology analysis from Part II into working Guardian schema structures.

By the end of this chapter, you'll know how to create a VM0033-style PDD schema structure yourself, understanding each Guardian field type, conditional logic implementation, and how methodology parameters become functional data collection forms.

    Guardian Schema Development Process

    Complex Guardian schemas can be built using Excel templates that define the data structure, and then imported into Guardian. The schema template shows all available field types and their configuration options.

    Alternative Schema Building Methods:

    • Excel-first approach (recommended for complex methodologies): Design in Excel, then import - covered in this chapter

    • Guardian UI approach: Build directly in Guardian interface - see Creating Schemas Using UI

The Excel-first approach also makes it easier to collaborate with carbon domain experts and other non-technical stakeholders, supporting back-and-forth feedback when schemas are complex.

    Schema Template Structure

Every Guardian schema follows this Excel structure:

Required Field | Field Type | Parameter | Visibility | Question | Allow Multiple Answers | Answer
Yes/No | String/Number/Enum/etc | Reference to enum | TRUE/FALSE/hidden | User-facing question | Yes/No | Default value

    Field Configuration Meaning:

    • Required Field: Whether users must complete this field before submission

    • Field Type: Data type (String, Number, Date, Enum, Boolean, Sub-Schema, etc.)

    • Parameter: Reference to enum options or calculation parameters

• Visibility: Field display conditions (TRUE=always visible, FALSE=hidden unless condition met)

• Question: Text that users see as the field label

• Allow Multiple Answers: Whether field accepts multiple values

• Answer: Default value or example shown to users

    Building the Primary Schema Structure

Let's build a PDD schema step-by-step, starting with a main schema definition modeled on VM0033's "Project Description (Auto)" tab.

    Step 1: Create Main Schema Header

    Start your Excel file with these header rows:
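A minimal sketch of these header rows, mirroring the tab/description/schema-type pattern used by the monitoring schema rows later in this document (the description text is an assumed placeholder):

Row 1: Project Description (Auto)
Row 2: Description | Project design document data for the wetland restoration project
Row 3: Schema Type | Verifiable Credentials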

    This establishes your schema as a Verifiable Credentials type that Guardian will process into on-chain records.

    Step 2: Add Certification Pathway Selection

    The first functional field should be your primary conditional logic driver. For VM0033, this is certification type selection:
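A minimal sketch of such a row (the row number and default value are illustrative):

Row 5: Yes | Enum | Choose project certific (enum) | | Project certification type | No | VCS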

    This creates an enum field that determines which additional requirements appear. The parameter reference "Choose project certific (enum)" points to a separate enum tab defining the options.

Create the Enum Tab: Add a new worksheet named "Choose project certific (enum)" with the following structure (sheet names might be trimmed to accommodate Excel's length limitations):
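A minimal sketch of the enum tab contents, following the enum-tab pattern shown later in this document (the option labels are assumptions based on the VCS and VCS+CCB pathways discussed in this chapter):

Schema name | Project Description (Auto)
Field name | Project certification type
Loaded to IPFS | No
VCS |
VCS + CCB |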

    Step 3: Add Conditional Sub-Schemas

    Based on the certification selection, different sub-schemas should appear. Add conditional schema references:
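A minimal sketch of these reference rows (row numbers are illustrative; the sub-schema names match the worksheets created in Step 4 below):

Row 6: Yes | VCS Project Description v4.4 | | TRUE | VCS project description | No |
Row 7: No | CCB | | FALSE | CCB project details | No |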

    The VCS sub-schema always appears (TRUE visibility), while CCB appears only when CCB certification is selected (FALSE = conditional visibility based on enum selection).

    Step 4: Create Sub-Schema Structures

    VCS Project Description Sub-Schema

    Create a new worksheet "VCS Project Description v4.4" with basic project information:

    CCB Sub-Schema (Conditional)

    Create "CCB" worksheet for additional community/biodiversity requirements:

    Implementing Project Information Fields

    Geographic Data Capture

Add geographic information fields to your main schema or sub-schema:

Row 5: Yes | Object | | | Project Location | No |

    Create the unit selection enum tab "AcresHectares (enum)":
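A minimal sketch of that enum tab (the field name and option labels are assumed from the tab name):

Schema name | Project Description (Auto)
Field name | Project area unit
Loaded to IPFS | No
Acres |
Hectares |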

    Project Timeline Fields

    Adding Methodology-Specific Parameters

    Now translate your Part II parameter analysis into Guardian fields. For VM0033's biomass parameters:

    Step 1: Add Parameter Collection Fields
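A minimal sketch of a parameter collection row, reusing the biomass density field referenced in the field-key section below (row number and default value are illustrative):

Row 9: Yes | Number | | | Biomass density (t d.m. ha⁻¹) | No | 1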

    Step 2: Add Calculation Method Selection

    Create the method enum tab:
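A minimal sketch of the method enum tab (option labels are assumed from the direct/indirect method selection used in Step 3):

Schema name | Project Description (Auto)
Field name | Biomass calculation method
Loaded to IPFS | No
Direct measurement |
Indirect calculation |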

    Step 3: Add Method-Specific Parameter Fields

Add conditional fields that appear based on method selection:

No | Number | | FALSE | Direct measurement biomass (if direct method selected) | No | 1
No | Number | | FALSE | Indirect calculation biomass (if indirect method selected) | No | 1

    These fields have FALSE visibility, meaning they appear conditionally based on the method selection enum.

    Integrating AR Tools and External Modules

    Adding AR Tool Integration

    VM0033 uses AR Tool 14 for biomass calculations. Add tool integration:

    Create AR Tool Sub-Schema

    Create "AR Tool 14" worksheet for tool-specific parameters:

    Implementing Baseline and Project Calculations

    Baseline Scenario Fields

    Create a sub-schema for baseline emissions:

    Create "(New) Final Baseline Emissions" worksheet:

    Project Emissions Structure

    Similarly create project emissions calculation fields:

    Advanced Field Types and Features

    Auto-Calculate Fields

For calculated results that update automatically:

Row 10: AutoCalculate | Number | | | Baseline emissions annual (t CO2e) | No |
Row 11: AutoCalculate | Number | | | Project emissions annual (t CO2e) | No |
Row 12: AutoCalculate | Number | | | Net emission reductions (t CO2e) | No |

// In custom logic block
const baselineEmissions = calculateBaselineEmissions(data);
const projectEmissions = calculateProjectEmissions(data);
const netReductions = baselineEmissions - projectEmissions;

// Assign to Auto Calculate fields
outputs.baseline_emissions_annual = baselineEmissions;
outputs.project_emissions_annual = projectEmissions;
outputs.net_emission_reductions = netReductions;

    File Upload Fields

    For evidence documentation:
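A minimal sketch, reusing the Image field pattern from the monitoring schema rows later in this document (the field label is an assumption):

No | Image | | | Supporting evidence document scan | No | ipfs://05566a658a44c6f747b5f82a2de1e0bf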

    Help Text Fields

    Add contextual guidance:

Hidden Fields for System Use

Row 8: Hidden | String | | Hidden | Internal project reference | No | PROJ-2024-001

    Conditional Logic Implementation

    Simple Conditional Visibility

    Use TRUE/FALSE in the Visibility column:

    • TRUE: Always visible

    • FALSE: Visible only when referenced condition is met

    • Hidden: Never visible to users (system fields)

    Complex Conditional Logic

    For multiple conditions, Guardian evaluates enum selections to determine field visibility. The FALSE visibility fields become visible when their referenced enum is selected appropriately.

    Quality Control and Validation

    Required Field Validation

    Use "Yes" in Required Field column to enforce completion:

    Data Type Validation

    Guardian automatically validates based on Field Type:

    • Number: Only accepts numeric values

    • Date: Validates date format (2000-01-01)

    • Email: Validates email format

    • URL: Validates URL format

    Pattern Validation

    For custom validation patterns:

    Testing Your Schema Structure

    Validation Checklist

Before importing to Guardian, verify:

• Sub-schema references point to existing worksheets

    Import Testing and Schema Refinement

    1. Save Excel file with proper structure

    2. Import to Guardian

    3. Test conditional logic with different selections

4. Validate auto-calculate fields

5. Review and rename field keys for meaningful calculation code

6. Update the schema ID in the relevant policy workflow block:

#5dcdd058-988e-4e9f-9347-8766597396db
{
  "blockType": "requestVcDocumentBlock",
  "schemaId": "#5dcdd058-988e-4e9f-9347-8766597396db",
  "uiMetaData": {
    "title": "PDD Submission"
  }
}

    Important: Field Key Management

    When Guardian imports Excel schemas, it generates default field keys that may not be meaningful for calculation code. For example:

• Excel field "Biomass density (t d.m. ha⁻¹)" becomes field key "G5", named after the Excel cell where it was found

    • Default keys make calculation code harder to read and maintain

    Best Practice: After import, open the schema in Guardian UI to rename field keys:

    1. Navigate to schema management in Guardian

    2. Open your imported schema for editing

    3. Review each field's "Field Key" property

4. Rename keys to be calculation-friendly:

• biomass_density_stratum_i instead of field0

• carbon_stock_baseline_t instead of carbonStockBaselineT

• emission_reduction_total instead of emissionReductionTotal

Why This Matters: Meaningful field keys make calculation code much easier to write and maintain:

// With good field keys - self-documenting
const totalEmissions = (
  data.biomass_density_stratum_i * data.area_hectares_stratum_i *
  data.carbon_fraction_tree * data.co2_conversion_factor
);

// With poor field keys - requires comments and documentation
const totalEmissions = (
  data.field0 * data.field1 * data.field2 * data.G5
); // What calculation is this performing?

    Connecting to Monitoring Schemas

    Your PDD schema establishes the foundation that monitoring schemas build upon. Key connections:

    Parameter Continuity

    Ensure PDD parameters have corresponding monitoring equivalents:

    • PDD: Initial biomass density estimate

    • Monitoring: Annual biomass density measurements

    Calculation Consistency

    Use same parameter names and calculation approaches:

    • PDD parameter key: biomass_density_initial

    • Monitoring parameter key: biomass_density_year_t
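A minimal JavaScript sketch of how these keys line up in calculation code (the pdd and monitoring objects are hypothetical containers for the two documents' field values):

// Annual change in biomass density relative to the initial PDD estimate
const annualChange = monitoring.biomass_density_year_t - pdd.biomass_density_initial;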

    Conditional Logic Alignment

    Method selections in PDD should drive monitoring parameter requirements:

    • Direct method PDD → Direct measurement monitoring fields

    • Indirect method PDD → Indirect calculation monitoring fields

    Best Practices Summary

    Start Simple: Begin with basic project information, then add complexity systematically.

    Test Incrementally: Validate each section before adding the next level of complexity.

    Use Sub-Schemas: Break complex sections into manageable sub-schema components.

    Plan Conditionals: Design conditional logic to reduce user interface complexity while maintaining requirement coverage.

    Link to Analysis: Every parameter should trace back to specific methodology requirements from Part II analysis.

    Validate with Stakeholders: Test schema workflows with actual Project Developers and VVBs before production deployment.

    The next chapter builds on this PDD foundation to create monitoring schemas that handle time-series data collection and calculation updates over project lifetimes.

    Chapter 5: Equation Mapping and Parameter Identification

    After completing the analysis approach in Chapter 4, we faced the challenge of extracting all the mathematical components from VM0033's 130-page methodology. The document contained dozens of equations scattered across different sections, with complex dependencies between parameters that weren't always obvious. This chapter shares the recursive analysis approach we developed to systematically map every calculation and identify all required parameters.

    The recursive analysis technique works backwards from the final calculation goal to identify every single input needed. Instead of trying to read through equations linearly, we start with what we want to calculate and trace backwards until we reach basic measured values or user inputs. This approach ensured we didn't miss any dependencies and helped us understand how all the calculations fit together.

    Understanding the Recursive Analysis Approach

    When we first looked at VM0033's main equation, it seemed straightforward:

    NERRWE = GHGBSL - GHGWPS + FRP - GHGLK

    Where:

    • NERRWE = Net CO₂e emission reductions from the wetland project activity

    • GHGBSL = Net CO₂e emissions in the baseline scenario

    • GHGWPS = Net CO₂e emissions in the project scenario

• FRP = Fire reduction premium (bonus for reducing fire risk)

• GHGLK = Leakage emissions

    But each of these terms turned out to have its own complex calculations. GHGBSL alone involved multiple sub-calculations for different types of emissions, time periods, and restoration activities. We quickly realized we needed a systematic way to trace through all these dependencies.

    The Recursive Process We Used:

    1. Start with final goal: NERRWE (what we ultimately want to calculate)

    2. Identify direct dependencies: GHGBSL, GHGWPS, FRP, GHGLK

    3. For each dependency, repeat the process: What do we need to calculate GHGBSL?

    4. Continue until reaching basic inputs: Measured values, user inputs, or default factors

    This process revealed that calculating NERRWE for a mangrove project requires hundreds of individual parameters and intermediate calculations, many of which weren't obvious from just reading the methodology sequentially.
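To make the backward walk concrete, here is a minimal JavaScript sketch (the dependency map is illustrative - VM0033's real graph has hundreds of nodes - and a cycle-free graph is assumed):

const deps = {
  NERRWE: ['GHGBSL', 'GHGWPS', 'FRP', 'GHGLK'],
  GHGBSL: ['baseline_biomass_stratum_i', 'area_stratum_i'],
  GHGWPS: ['tree_growth_rate', 'fuel_consumption']
};

// Collect every basic input (leaf node) needed to compute a target parameter.
function collectBasicInputs(param, found = new Set()) {
  const children = deps[param] || [];
  if (children.length === 0) {
    found.add(param); // measured value, user input, or default factor
  }
  for (const child of children) {
    collectBasicInputs(child, found);
  }
  return found;
}

console.log([...collectBasicInputs('NERRWE')]);
// -> the basic inputs feeding NERRWE, e.g. baseline_biomass_stratum_i, area_stratum_i, ...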

    Why This Approach Worked

    Comprehensive Coverage: Working backwards ensured we found every required input, even parameters that were buried deep in sub-calculations or referenced indirectly through multiple layers.

    Logical Implementation Order: Understanding dependencies helped us plan implementation sequence - we knew we needed basic measurements before intermediate calculations, and intermediate calculations before final results.

    Error Prevention: The dependency mapping showed us where validation should happen at each step, rather than only discovering problems at the final calculation stage.

    Parameter Classification System

    As we traced through VM0033's calculations, we realized we needed to organize the hundreds of parameters we were discovering. We developed a classification system that helped us understand what data users would need to provide and when.

    Parameter Categories We Used:

    Monitored Parameters

    These are values that project developers collect through field measurements or laboratory analysis. The Allcot ABC Mangrove project shows how these measurements connect to actual calculations:

    Tree Measurements: The project tracks baseline biomass (ABSL,i) and project biomass (AWPS,i) for each stratum. For example, Stratum 1 starts with 1149 t C/ha baseline biomass, while Stratum 3 has 2397 t C/ha - these differences required separate tracking because they feed into different calculation pathways.

    Soil Measurements: Soil sampling provides bulk density (BD), organic matter content (%OMsoil), and carbon content (%Csoil) that the recursive analysis revealed are needed for soil carbon change calculations. The project requires "stratum and horizon average" values since conditions vary within each restoration area.

    Site Conditions: Sediment accretion rates (SA) and ecosystem classifications affect growth projections and carbon accumulation calculations. The recursive analysis showed these seemingly simple inputs actually influence multiple calculation branches.

    Project Activity Data: Area measurements for each stratum (ranging from 1090 to 2222 hectares in the Allcot project) become critical because all carbon calculations get multiplied by area - missing or incorrect area data would invalidate all results.

    User-Input Parameters

    These are project-specific values that users provide during setup or periodically update:

    Project Description: Project area size, crediting period length, restoration activities planned, geographic location.

    Management Decisions: Choice of monitoring frequency, selection of calculation methods where VM0033 provides options, decisions about which optional calculations to include.

    Economic Data: Costs for fossil fuel use calculations (needed for AR-Tool05), labor and equipment information for project emission calculations.

    Default Values

    VM0033 provides standard values that can be used when site-specific measurements aren't available:

    Growth Factors: Default allometric equations for different mangrove species, default root-to-shoot ratios, standard wood density values.

    Emission Factors: Default factors for methane and nitrous oxide emissions, fossil fuel emission factors from AR-Tool05, decomposition rates for different organic matter types.

    Conversion Factors: Units conversions, carbon content factors, global warming potential values for different greenhouse gases.

    Calculated Parameters

    These values get computed from other parameters using VM0033's equations:

    Intermediate Calculations: Area-weighted averages across different project zones, annual growth increments, cumulative totals over time periods.

    Complex Dependencies: Parameters that depend on multiple inputs and conditional logic, such as eligibility determinations that vary based on site conditions and project activities.

    Building Parameter Dependency Trees

    The most challenging part of our recursive analysis was mapping how parameters depend on each other. Some dependencies were simple and direct, while others involved complex conditional logic or calculations that changed over time.

    Simple Dependencies: Many parameters have straightforward relationships. For example, total project carbon stock depends on individual tree biomass calculations, which depend on DBH measurements and species-specific allometric equations.

    Conditional Dependencies: VM0033 includes many calculations that only apply under certain conditions. Fire reduction premiums only apply if projects reduce fire risk. Methane emission calculations depend on whether soil stays flooded or gets drained.

    Time-Dependent Relationships: Many calculations change over time as trees grow and conditions change. We had to map not just what parameters were needed, but when they were needed and how they changed over the project lifetime.

    Dependency Mapping Process

    Visual Mapping: We created flowcharts and tree diagrams showing how parameters related to each other. This helped us see the big picture and identify where we might have missed connections.

    Calculation Sequences: We documented the order in which calculations need to happen, ensuring that required inputs are available before calculations that depend on them.

    Validation Points: The dependency trees showed us where to include validation checks - if a parameter fails validation, which calculations would be affected, and how to provide helpful error messages.

    Working Through VM0033's Key Calculations with Allcot Project Examples

    Let me walk through how we applied recursive analysis to VM0033's main calculation components, using the actual Allcot ABC Mangrove project to show how boundary decisions simplify the recursive analysis.

    Baseline Emissions (GHGBSL) Analysis

    The Allcot project made a key decision that simplified baseline calculations: "Does the project quantify baseline emission reduction? = False". This eliminated entire calculation branches from our recursive analysis.

    What This Decision Meant: Instead of calculating emissions from continued degradation, the project only claims benefits from restoration activities. This removed complex soil carbon loss calculations that would have required:

    • Peat depletion rates (not applicable - all mineral soil)

    • Soil organic carbon loss rates

    • Temporal boundary calculations (PDT and SDT both = 0)

    Simplified Baseline for Allcot: With mineral soil across all strata and no baseline emission reduction claims, the baseline scenario becomes straightforward - track existing biomass levels (1149, 2115, 2397, 1339 t C/ha across the four strata) without complex degradation modeling.

    Recursive Analysis Benefit: By starting with NERRWE and working backwards, we discovered early that the boundary decisions eliminated major calculation branches, allowing us to focus implementation effort on the actual requirements rather than building unused functionality.

    Project Emissions (GHGWPS) Analysis

    Project emissions include both the carbon benefits from restoration and any emissions caused by project activities.

    Carbon Benefits (Negative Emissions):

    • Tree Growth: Mangroves sequester carbon as they grow, calculated using AR-Tool14 equations

    • Soil Improvement: Restoration improves soil conditions, reducing carbon loss rates

    Project Activity Emissions (Positive Emissions):

    • Fossil Fuel Use: Boats, equipment, and transportation for project activities (calculated using AR-Tool05)

    • Disturbance Effects: Temporary emissions from site preparation activities

    Parameter Dependencies We Mapped:

    • Tree growth rates (species-specific, site conditions)

    • Fuel consumption for project activities (equipment types, distances, frequencies)

    • Soil improvement rates (depends on restoration techniques and site conditions)

    Tools Integration Through Recursive Analysis

    VM0033 references external tools (AR-Tool05, AR-Tool14, AFLOU) that have their own parameter requirements. Recursive analysis helped us understand how these tools fit into the overall calculation framework.

Calculation Reference: See the complete equation mapping and parameter dependencies in our parsed methodology and VM0033 test artifact, available in the Artifacts Collection.

    AR-Tool14 for Biomass Calculations:

    • Inputs Required: Tree diameter measurements, species identification, site conditions

    • Outputs Provided: Above-ground and below-ground biomass estimates

    • Integration Point: Biomass outputs feed into project emission calculations

    AR-Tool05 for Fossil Fuel Emissions:

    • Inputs Required: Equipment types, fuel consumption rates, activity frequencies

    • Outputs Provided: CO₂ emissions from project activities

    • Integration Point: Fossil fuel emissions get added to project emission totals

    Handling Conditional Calculations and Alternative Methods

    VM0033 includes many situations where calculations depend on project-specific conditions or where multiple calculation methods are available. Our recursive analysis had to account for these variations, and the Allcot ABC Mangrove project provides concrete examples of how these decisions affect implementation.

    Allcot ABC Mangrove Project Boundary Decisions:

    From the project boundary analysis in our test artifact, the Allcot ABC Mangrove project made specific choices about what to include in calculations:

    Carbon Pools Included:

    • Above-ground tree biomass (CO₂): Included - This is the main carbon benefit from planting mangroves

    • Below-ground tree biomass (CO₂): Included - Root systems store significant carbon in mangrove restoration

    • Soil organic carbon: Excluded in baseline, Included in project - The project improves soil conditions over time

    Carbon Pools Excluded:

    • Litter and Dead Wood: Excluded - Methodology allows these to be optional for wetland restoration

    • Wood Products: Excluded - No harvesting planned in the mangrove restoration project

    • Non-tree Biomass: Excluded - Focus is on tree restoration, not herbaceous vegetation

    Greenhouse Gas Sources:

    • Methane (CH₄) from soil microbes: Excluded - Conservatively omitted to simplify calculations

    • Nitrous oxide (N₂O): Excluded - Also conservatively excluded

    • Fossil fuel emissions: Excluded - Mangrove planting doesn't require heavy machinery

    Quantification Approach Choices:

    The Allcot project made specific methodological choices that affected parameter requirements:

• Soil Carbon Approach: "Total stock approach" - comparing final soil carbon stocks rather than tracking annual loss rates

• Baseline Emission Reductions: False - the project doesn't claim benefits from stopping degradation, only from restoration activities

• NERRWE-max Cap: False - no maximum cap on annual credit generation

• Fire Reduction Premium: False - no fire risk reduction claimed (this removed all fire-related parameters from our implementation)

    Conditional Parameter Logic from Allcot Project

    Soil Type Conditions: All four strata in the Allcot project have "Mineral soil" type, which means:

• Peat-related parameters (Depthpeat,i,t0, Ratepeatloss-BSL,i) are "Not applicable"

    • Soil disturbance parameters don't apply

    • Temporal boundary calculations are simplified (PDT = 0, SDT = 0 for all strata)

    Project Activity Dependencies: Since Fire Reduction Premium = False:

    • All fire-related emission factors are excluded

    • GWP factors for CH₄ and N₂O only needed if soil methane/nitrous oxide included

    • Burning emission calculations completely skipped

    Site-Specific vs. Default Values: The Allcot project required site-specific measurements for:

    • Soil bulk density (BD) - "User provide stratum and horizon average in the value applied field"

    • Soil carbon content (%OMsoil, %Csoil) - Collected through soil sampling data upload

    • Tree measurements for biomass calculations (ABSL,i and AWPS,i values)

    Implementation Simplifications

    Boundary Condition Benefits: The Allcot project's boundary choices significantly simplified our implementation:

    • No peat soil calculations needed (all mineral soil)

    • No fire premium calculations (eliminated ~15 parameters)

    • No wood product calculations (eliminated long-term storage complexity)

    • No fossil fuel tracking for project activities (simple planting operation)

    Monitoring Frequency: The project uses annual monitoring with field measurements for tree growth, avoiding the need for complex growth modeling between measurement periods.

    Stratum Management: Four distinct strata with different baseline biomass values (1149, 2115, 2397, 1339 t C/ha), each requiring separate parameter tracking but using the same calculation procedures.

    Managing Calculation Alternatives

    Implementation Strategy: Rather than trying to implement every possible variation initially, we focused on the most common approaches for mangrove restoration projects. This kept our initial implementation manageable while still meeting methodology requirements.

    Future Expansion: The dependency maps we created during recursive analysis provide roadmaps for adding additional calculation options later as needed.

    Creating Documentation and Validation Framework

    The recursive analysis process generated extensive documentation that became essential for both implementation and ongoing maintenance.

    Parameter Documentation: For each parameter we identified, we documented:

    • Definition and units

    • Data source (measured, user input, or default)

    • Validation requirements (ranges, formats, dependencies)

• When it's used in calculations

• How it relates to other parameters

    Calculation Flowcharts: We created visual diagrams showing how data flows through the calculation system from basic inputs to final results. These flowcharts helped us:

    • Verify our understanding of VM0033's requirements

    • Plan implementation sequence

    • Design user interfaces that collect information in logical order

    • Create validation checks at appropriate points

    Validation Logic: The dependency trees revealed exactly where validation should happen:

    • Input Validation: Check individual parameters as users enter them

    • Intermediate Validation: Verify calculated values make sense before using them in subsequent calculations

    • Final Validation: Confirm overall results are reasonable and meet methodology requirements

    Practical Lessons from VM0033 Implementation

    Start Simple, Build Complexity: We initially tried to map every possible calculation path in VM0033, which was overwhelming. It worked better to start with the most basic mangrove restoration scenario and add complexity gradually.

    Documentation is Critical: The recursive analysis generates a lot of information. We learned to document everything systematically because details that seemed obvious at the time became confusing weeks later during implementation.

    Test Understanding Early: We regularly tested our understanding by working through example calculations manually. This helped us catch misunderstandings in the recursive analysis before they became implementation problems.

    Plan for Iteration: Our first attempt at recursive analysis missed some dependencies and misunderstood some relationships. Building in time for multiple iterations helped us refine our understanding and improve the parameter mapping.

    From Parameter Mapping to Implementation Planning

    The recursive analysis and parameter identification work creates the foundation for the tool integration and test artifact development covered in the next chapters.

    Tool Integration Preparation: Understanding parameter dependencies helps identify which external tools are needed and how they integrate with methodology-specific calculations.

    Test Artifact Requirements: The complete parameter lists and calculation sequences become the basis for creating comprehensive test spreadsheets that validate implementation accuracy.

    Schema Design Foundation: Although schema design comes in Part III, the parameter classification and dependency mapping from this chapter directly informs what data structures and validation rules we'll need.


    Parameter Mapping Summary and Next Steps

    Mathematical Foundation Complete: You now understand the systematic approach we used to extract and organize all mathematical components from VM0033.

Key Analysis Outcomes:

• Documentation framework for implementation support

    Preparation for Chapter 6: The parameter dependencies and tool integration points identified in this chapter become the focus of Chapter 6, where we'll cover systematic integration of AR-Tool05, AR-Tool14, and AFLOU non-permanence risk tool.

    Real-World Application: While we used VM0033 as our example, the recursive analysis technique works for any methodology with complex calculations. The approach of starting from final results and working backwards systematically ensures comprehensive coverage regardless of methodology complexity.

    Implementation Reality: This recursive analysis work took several weeks during VM0033 digitization, but it prevented months of problems later by ensuring we understood all dependencies before starting implementation.

    Chapter 21: Calculation Testing and Validation

    Comprehensive testing and validation using Guardian's dry-run mode and testing framework with VM0033 and AR Tool 14 test artifacts

    This chapter demonstrates how to leverage Guardian's built-in testing capabilities to validate environmental methodology calculations. Using Guardian's dry-run mode, customLogicBlock testing interface, and our comprehensive VM0033 and AR Tool 14 test artifacts, you'll learn to validate calculations at every stage: baseline, project, leakage, and final net emission reductions.

    Learning Objectives

    After completing this chapter, you will be able to:

    • Utilize Guardian's dry-run mode for comprehensive policy testing

    • Use Guardian's customLogicBlock testing interface for debugging calculations

    • Validate calculations against methodology test artifacts at each stage

    • Test baseline emissions, project emissions, leakage, and net emission reductions

    • Debug calculation discrepancies using Guardian's built-in tools

    • Implement automated testing using Guardian's API framework

    • Create test suites using real methodology test data

    Prerequisites

    • Completed Chapters 18-20: Custom Logic Block Development, Formula Linked Definitions, and Guardian Tools Architecture

• Access to test artifacts: VM0033 Test Spreadsheet, Final PDD VC, ER Calculations

    • Understanding of Guardian dry-run mode

    • Familiarity with Guardian testing interface

    Guardian's Built-in Testing Framework

    Why Guardian's Native Testing is Essential

    Environmental methodology calculations directly impact carbon credit credibility and market trust. Guardian provides comprehensive testing capabilities specifically designed for environmental methodologies:

    • Dry-run mode - Complete policy execution without blockchain transactions

    • CustomLogicBlock testing interface - Interactive testing and debugging

    • Virtual users - Multi-role workflow testing

    • Artifact tracking - Complete audit trail of calculations

    Policy Testing Hierarchy

    Our recommended testing framework supports multiple validation levels:

    1. CustomLogicBlock Testing - Individual calculation block validation using Guardian's testing interface

    2. Dry-Run Policy Execution - Complete workflow testing using dry-run mode

    3. Tool Integration Testing - AR Tool and other tool validations

    4. End-to-End Workflow Testing - Complete credit issuance workflows

    Working with VM0033 Test Artifacts

    VM0033 Test Case Artifacts

    Our methodology implementation includes comprehensive test artifacts extracted from the official VM0033 test spreadsheet:

• VM0033 Test Spreadsheet - Complete Allcot test case with all calculation stages

• Final PDD VC - Complete Guardian Verifiable Credential with net ERR data and test calculations

• ER Calculations - JavaScript implementation of emission reduction calculations

    Understanding VM0033 Test Data Structure

    The VM0033 test artifacts provide validation data for all calculation stages:

    Key Test Values from VM0033 Allcot Test Case:

    • Baseline Emissions: Multiple ecosystem types and emission sources

    • Project Emissions: Restoration activities and maintenance

    • Leakage: Market and activity displacement calculations

• Net Emission Reductions: Final creditable emission reductions

• SOC (Soil Organic Carbon): Soil carbon stock changes

• Uncertainty Assessment: Monte Carlo simulation results

    Using Guardian's CustomLogicBlock Testing Interface

    Interactive Testing and Debugging

    Guardian provides a powerful testing interface specifically designed for customLogicBlock validation. This interface allows you to test calculation logic independently without running the entire policy.

    Accessing the Testing Interface

    Following Guardian's testing documentation:

    1. Navigate to Policy Editor - Open your methodology policy in draft mode

    2. Select customLogicBlock - Click on the calculation block you want to test

    3. Enter Testing Mode - Click the "Test" button in the block configuration

    4. Configure Test Data - Use schema-based input, JSON editor, or file upload

    Testing Input Methods

    Guardian supports three primary input methods for testing:

    a. Schema-Based Input

    • Select a data schema from dropdown list

    • Dynamic form generated based on schema

    • Ideal for structured and guided input interface

    b. JSON Editor

    • Direct JSON-formatted data input

    • Best for advanced users needing precise control

    • Supports complex data structures

    c. File Upload

    • Upload JSON file containing test data

    • Must be well-formed JSON

    • Perfect for using our VM0033 test artifacts

    Testing VM0033 Calculations

    Step 1: Get the PDD VC generated after submitting the new project data

Using our Final PDD VC artifact, fill in the JSON input data.

    Step 2: Execute Test

    1. Open CustomLogicBlock - Navigate to baseline calculation block in policy editor

    2. Upload Test Data - Use file upload method with baselineTestInput JSON

    3. Run Test - Execute the calculation

4. Validate Results - Compare outputs against expected values from VM0033 spreadsheet

5. Compare Against Reference Implementation - Use our ER calculations as reference

    Step 3: Using Debug Function

    Guardian provides a debug() function for calculation tracing:
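A minimal sketch of how tracing might look inside a customLogicBlock (the helper functions are hypothetical, and the exact debug() signature may vary by Guardian version):

// Trace intermediate values while validating against the test spreadsheet
const baselineEmissions = calculateBaselineEmissions(inputData);
debug('baselineEmissions', baselineEmissions);

const netReductions = baselineEmissions - calculateProjectEmissions(inputData);
debug('netReductions', netReductions);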

    Debug output appears in the Logs tab of the testing interface.

    Testing with Guardian's Dry-Run Mode

    Complete Policy Workflow Testing

    Guardian's dry-run mode allows testing complete methodology workflows without blockchain transactions.

    Setting Up Dry-Run Mode

    1. Import Policy - Import your VM0033 policy configuration

    2. Enable Dry-Run - Change policy status from Draft to Dry-Run

    3. Create Virtual Users - Set up test users for different roles (Project Developer, VVB, Registry)

    4. Execute Workflow - Run complete credit issuance process

    Dry-Run Artifacts and Validation

    Guardian's dry-run mode provides comprehensive tracking:

    Transactions Tab

    View mock transactions that would be executed on Hedera:

    • Token minting transactions

    • Document publishing transactions

    • Schema registration transactions

    Artifacts Tab

    Review all generated documents:

    • PDD Verifiable Credentials

    • Monitoring Report VCs

    • Validation Report VCs

    • Verification Report VCs

    IPFS Tab

    Track files that would be stored in IPFS:

    • Policy configuration files

    • Schema definitions

    • Document attachments

    API-Based Testing Framework

    Automated Testing with Guardian APIs

Guardian provides comprehensive APIs for automated testing workflows. Reference the API automation testing guide.

    Setting Up Cypress Testing

    Dry-Run API Testing

    Key API endpoints for testing:
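A rough sketch of the kinds of dry-run calls an automated test exercises (the paths below are illustrative assumptions - confirm the exact routes in your Guardian version's API reference):

PUT /api/v1/policies/{policyId}/dry-run - switch the policy into dry-run mode (assumed)
POST /api/v1/policies/{policyId}/dry-run/user - create a virtual user for a role (assumed)
GET /api/v1/policies/{policyId}/dry-run/transactions - inspect mock transactions (assumed)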

    Best Practices for Methodology Testing

    Test Data Management

    1. Use Real Test Cases - Always test against official methodology calculation spreadsheets

    2. Test All Calculation Paths - Validate baseline, project, leakage, and net ERR calculations

    3. Include Edge Cases - Test zero values, maximum values, and boundary conditions

    4. Maintain Test Data Versions - Version control test artifacts alongside policy changes

    Testing Approach

    1. Start with CustomLogicBlock Testing - Validate individual calculation functions first

    2. Progress to Dry-Run Testing - Test complete workflows with virtual users

    3. Validate Against Spreadsheets - Compare all outputs to methodology test cases

    4. Document Test Results - Maintain testing logs and validation reports

    Debugging Calculation Issues

    When calculations don't match expected results:

    1. Use Debug Functions - Add debug() statements to trace calculation steps

    2. Check Units and Conversions - Verify unit consistency across calculations

    3. Validate Input Data - Ensure test data matches spreadsheet exactly

    4. Review Intermediate Results - Break complex calculations into testable components

    Chapter Summary

    Our testing framework provides comprehensive capabilities for validating environmental methodology calculations:

    • CustomLogicBlock Testing Interface - Interactive testing and debugging with multiple input methods

    • Dry-Run Mode - Complete policy workflow testing without blockchain transactions

    • Test Artifact Integration - Validation against official methodology test cases

    • API Testing Framework - Automated testing using Guardian's REST APIs

    Key Testing Workflow

    1. Extract test data from methodology spreadsheets like VM0033_Allcot_Test_Case_Artifact.xlsx

    2. Test individual calculations using CustomLogicBlock testing interface

    3. Validate complete workflows using dry-run mode with virtual users

    4. Compare results against expected values from official test cases

    Next Steps

    This completes Part V: Calculation Logic Implementation. With comprehensive testing validation, your Guardian methodology implementations are ready for production deployment with confidence in calculation accuracy.

    References and Further Reading

• VM0033 Test Artifacts - Complete test dataset for validation

• AR Tool 14 Implementation - Production tool configuration


    Chapter 22: End-to-End Policy Testing

    Testing complete methodology workflows across all stakeholder roles using Guardian's dry-run capabilities and VM0033 production patterns

    Part V covered calculation writing and testing within individual blocks. Chapter 22 takes you beyond component testing to validate entire methodology workflows. Using Guardian's dry-run mode and VM0033's multi-stakeholder patterns, you'll learn to test complete project lifecycles from PDD submission through VCU token issuance.

    Real-world methodology deployment demands testing workflows that span months of project activity, multiple stakeholder roles, and hundreds of documents. Guardian's dry-run system lets you simulate these workflows without blockchain costs or time delays.

    Multi-Role Testing Framework

    Chapter 27: Integration with External Systems

    Strategies for data exchange between Guardian and external platforms

    This chapter demonstrates two critical integration patterns for connecting Guardian policies with external environmental registry systems. You'll learn how to transform Guardian data for external platforms like Verra Project Hub and how to receive MRV data from external devices and systems.

    Integration Architecture Overview

    Guardian's policy workflow engine supports bidirectional integration with external systems through specialized workflow blocks and API endpoints. This enables Guardian to function as both a data provider and consumer in complex environmental certification ecosystems.

    Two Primary Integration Patterns:

    VM0033 Project → Standard Properties → Different Methodology
      GeographicLocation ✓
      AccountableImpactOrganization ✓
      CRU (Carbon Credits) ✓
      Validation Records ✓
    Row 1: Monitoring Report (Auto)
    Row 2: Description | Monitoring period input parameters for measuring carbon stock changes and GHG emissions
    Row 3: Schema Type | Verifiable Credentials
    Row 4: Required Field | Field Type | Parameter | Visibility | Question | Allow Multiple Answers | Answer
    Row 5: Yes | Number | | | Monitoring year | No | 7
    Row 6: Yes | Number | | | Monitoring period (years since project start) | No | 1
    Row 7: Yes | Date | | | Monitoring report submission date | No | 2000-01-01
    Row 8: Yes | String | | | Monitoring period identifier | No | MP-2024-01
    Row 9: Yes | (New) Monitoring Period Inputs | | | Monitoring Period Inputs | No |
    (New) Monitoring Period Inputs
    Description | Monitoring period input parameters for measuring carbon stock changes and GHG emissions
    Schema Type | Verifiable Credentials
    Required Field | Field Type | Parameter | Visibility | Question | Allow Multiple Answers | Answer
    Yes | Boolean | | | Baseline Aboveground non-tree biomass | No | True
    No | (New) MP Baseline Herbaceous V | | | Baseline herbaceous vegetation monitoring data | Yes |
    Yes | Number | | | Monitoring year | No | 7
    Yes | (New) MP Herbaceous Vegetat 1 | | | Measurements for each stratum | Yes |
    (New) MP Herbaceous Vegetation Stratum Data for Project
    Description | Stratum-level herbaceous vegetation monitoring
    Schema Type | Sub-Schema
    Required Field | Field Type | Parameter | Visibility | Question | Allow Multiple Answers | Answer
    Yes | String | | | Stratum number | No | 1
    Yes | Number | | | Carbon stock in herbaceous vegetation (t C/ha) - CBSL-herb,i,t | No | 1.5
    Yes | Number | | | Initial time T for measurement - Start_T (BSL) | No | True
    Yes | Number | | | Carbon stock at time T - CBSL-herb,i,(t-T) | No | 0.5
    Yes | Number | | | Change in carbon stock since last period | No | 0.2
    Yes | String | | | Explanation for significant changes | No | example
    Yes | Boolean | | | Data quality meets methodology requirements | No | True
    Yes | (New) Annual Inputs Parameters | | | Annual Inputs Parameters Baseline | No |
    (New) Annual Inputs Parameters Baseline
    Description | Annual input parameters for baseline calculations
    Schema Type | Sub-Schema
    Required Field | Field Type | Parameter | Visibility | Question | Allow Multiple Answers | Answer
    Yes | Number | | | Area of stratum (ha) – Ai,t | No | 1
    Yes | Number | | | Change in baseline tree-biomass carbon stock (t CO₂-e yr⁻¹) – ΔCTREE_BSL,i,t | No | 1
    Yes | Number | | | CO₂ emissions from in-situ soil (t CO₂e ha⁻¹ yr⁻¹) – GHGBSL-insitu-CO₂,i,t | No | 1
    Yes | Number | | | Percentage of organic carbon loss from in-situ soil (%) – C%BSL-emitted,i,t | No | 1
    Yes | (New) Annual Inputs Paramet 1 | | | Annual Inputs Parameters Project | No |
    (New) Annual Inputs Parameters Baseline
    Description | Annual input parameters for project calculations
    Schema Type | Sub-Schema
    Required Field | Field Type | Parameter | Visibility | Question | Allow Multiple Answers | Answer
    Yes | Number | | | Area of stratum (ha) – Ai,t | No | 1
    Yes | Number | | | Change in project tree-biomass carbon stock (t CO₂-e yr⁻¹) – ΔCTREE_WPS,i,t | No | 1
    Yes | Number | | | CO₂ emissions from project soil (t CO₂e ha⁻¹ yr⁻¹) – GHGWPS-insitu-CO₂,i,t | No | 1
    Yes | Number | | | Percentage of organic carbon in project soil (%) – C%WPS-soil,i,t | No | 1
    Yes | Enum | Data quality level (enum) | | Data quality level for this measurement | No | High
    Yes | String | | | Quality control procedures followed | No | example
    Yes | Image | | | Site photograph for verification | No | ipfs://05566a658a44c6f747b5f82a2de1e0bf
    Yes | String | | | GPS coordinates of measurement location | No | example
    Schema name | Monitoring Report (Auto)
    Field name | Data quality level for this measurement
    Loaded to IPFS | No
    High |
    Medium |
    Low |
    Yes | String | | | Measurement methodology used | No | example
    Yes | Date | | | Date of field measurement | No | 2000-01-01
    Yes | String | | | Personnel responsible for measurement | No | example
    No | String | | | Laboratory analysis results | No | example
    No | Image | | | Laboratory report scan | No | ipfs://05566a658a44c6f747b5f82a2de1e0bf
    No | Auto-Calculate | | | Updated baseline emissions (t CO2e) | No | 150.5
    No | Auto-Calculate | | | Updated project emissions (t CO2e) | No | 45.2
    No | Auto-Calculate | | | Net emission reductions this period (t CO2e) | No | 105.3
    No | Auto-Calculate | | | Cumulative emission reductions (t CO2e) | No | 850.7
    Yes | Number | | | Initial PDD estimate for comparison | No | 1
    Yes | Number | | | Variance from PDD estimate (%) | No | 5.2
    Yes | String | | | Explanation for variance | No | example
    Yes | Enum | Crediting period (enum) | | Crediting period | No | 1st period (0-10 years)
    Yes | Number | | | Year within current crediting period | No | 3
    Yes | Boolean | | | Final monitoring report for this period | No | False
    Schema name | Monitoring Report (Auto)
    Field name | Crediting period
    Loaded to IPFS | No
    1st period (0-10 years) |
    2nd period (10-20 years) |
    3rd period (20-30 years) |
    [continue as needed for methodology requirements]
    No | String | | Hidden | Previous monitoring report ID | No | example
    No | Number | | | Change since previous monitoring period | No | 2.5
    Yes | Boolean | | | Significant changes requiring explanation | No | False
    Yes | String | | | VVB assigned for verification | No | example
    No | Date | | | VVB site visit date | No | 2000-01-01
    No | Enum | Verification status (enum) | | Verification status | No | Under review
    No | String | | | VVB comments | No | example
    No | Boolean | | | Verification approved | No | False
    Schema name | Monitoring Report (Auto)
    Field name | Verification status
    Loaded to IPFS | No
    Under review |
    Approved |
    Requires revision |
    Rejected |
    No | String | | Hidden | Monitoring report version | No | v1.0
    No | Date | | Hidden | Last modification date | No | 2000-01-01
    No | String | | Hidden | Modification log | No | example
    Yes | Number | | | 3-year average carbon stock | No | 12.5
    Yes | Number | | | 5-year trend in carbon accumulation | No | 0.8
    Yes | String | | | Trend analysis explanation | No | example
    Yes | Number | | | Measurement uncertainty (%) | No | 5.0
    Yes | String | | | Uncertainty calculation method | No | example
    Yes | Number | | | Confidence interval lower bound | No | 10.2
    Yes | Number | | | Confidence interval upper bound | No | 14.8
    Yes | (New) Project Emissions Annual | | | Project Emissions Annual Data | No |
    (New) Project Emissions Annual
    Description | Annual project emissions data
    Schema Type | Sub-Schema
    Required Field | Field Type | Parameter | Visibility | Question | Allow Multiple Answers | Answer
    Yes | Number | | | Year | No | 2024
    Yes | String | | | Data collector | No | example
    [Include only essential annual fields to maintain performance]
    No | String | | Hidden | Archive status | No | Active
    No | Date | | Hidden | Archive date | No | 2000-01-01
    No | Boolean | | Hidden | Available for new calculations | No | True
    // With good field keys - monitoring calculation
    const annualChange = data.carbon_stock_current_period - data.carbon_stock_previous_period;
    const cumulativeER = data.emission_reduction_total + annualChange;
    
    // With default keys - confusing for time-series
    const annualChange = data.field5 - data.field12;
    const cumulativeER = data.field8 + annualChange;

Recommended Tools

• Development Tools: Text editor, browser, and API testing tools

• Postman or similar: For API testing and automation

• Git: For version control and collaboration

• Code Editor: VS Code or similar with JSON/JavaScript support

Part | Status | Chapters | Topics
Part III: Schema Design and Development | ✅ Available | Chapters 8-12 | Schema development, field management, testing checklist
Part IV: Policy Workflow Design and Implementation | ✅ Available | Chapters 13-17 | Policy workflow design, VM0033 implementation, production deployment
Part V: Calculation Logic Implementation | ✅ Available | Chapters 18-21 | Calculation logic, customLogicBlock development, Guardian Tools, testing
Part VI: Integration and Testing | ✅ Available | Chapters 22-23 | End-to-end testing, API integration, automated workflows
Part VII | 🚧 In Progress | Chapters 24-26 | User management, monitoring, maintenance procedures
Part VIII: Advanced Topics and Best Practices | ✅ Available | Chapters 27-28 | External integration, troubleshooting, advanced best practices


    Virtual User Management in Dry-Run Mode

    Guardian's dry-run mode creates a sandbox environment where you can simulate multiple users working simultaneously on different parts of your methodology. This approach mirrors production deployment while keeping testing fast and cost-effective.

    Setting Up Dry-Run Testing Environment:

    1. Import VM0033 Policy - Start with the VM0033 policy from shared artifacts

    2. Enable Dry-Run Mode - Switch policy status from Draft to Dry-Run

    3. Create Virtual Users - Set up users for each role (Project Proponent, VVB, OWNER)

    4. Execute Complete Workflows - Test full project lifecycle with role transitions

    Choose role during dry run
    Switch role UI
    VVB documents review UI for Registry role

    Creating Virtual Users for Multi-Role Testing

    Guardian allows Standard Registry users (OWNER role) to create virtual users for testing different stakeholder workflows. This feature enables testing approval chains and document handoffs. You can do so via API as well.

    Testing User Progression Pattern:

    1. Project Developer submits PDD using VM0033 project description schema

    2. Standard Registry reviews and lists project on their platform

    3. VVB accepts validation assignment from project proponent and conducts project review

    4. VVB submits validation report with project assessment

    5. Standard Registry approves or rejects project based on VVB validation

    6. Project Developer submits monitoring reports over crediting period

    7. VVB verifies monitoring data and submits verification reports

    8. Standard Registry issues VCU tokens based on verified emission reductions

    VM0033 Complete Workflow Testing

    Let's walk through testing VM0033's complete workflow using the navigation structure from the policy JSON. This demonstrates how dry-run testing validates stakeholder interactions across the full methodology implementation.

    Project Proponent Workflow Testing

    Step 1: Project Creation and PDD Submission

    The Project Proponent starts by accessing the "Projects" section and creating a new project using VM0033's PDD schema.

    New Project Form

    Testing should validate:

    • PDD form captures all required VM0033 parameters

    • Conditional schema sections display based on certification type (VCS vs VCS+CCB)

    • Calculation inputs integrate with custom logic blocks

    • Document submission creates proper audit trail

    VC document submitted

    Step 2: VVB Selection and Assignment

    After PDD submission and approval by registry, the project developer selects a VVB for validation. Testing confirms:

    • VVB selection interface displays approved VVB list

    • Assignment notification reaches selected VVB

    • Project status updates reflect VVB assignment

    • Document access permissions transfer correctly

[Screenshots: project approval/rejection UI within the SR role, and VVB selection via dropdown]

    VVB Workflow Testing

    Step 3: Project Validation Process

    VVBs access assigned projects through their dedicated interface. Validation testing includes:

    • Project document review and download capabilities

    • Validation checklist and assessment tools

    • Site visit data collection and documentation

    • Validation report submission using VM0033 validation schema

[Screenshots: project review UI, validation report UI, and validation report form]

    Step 4: Monitoring Report Verification

    During the crediting period, VVBs verify monitoring reports:

    • Annual monitoring data review and validation

    • Field measurement verification against monitoring plan

    • Calculation accuracy assessment using VM0033 test artifacts

    • Verification report submission with emission reduction confirmation

[Screenshots: monitoring report button on validated and approved projects, the add report dialog, assignment to Earthood, and the VVB view of the submitted report with auto-calculated values]

    Standard Registry (OWNER) Workflow Testing

    Step 5: Project Pipeline Management

    Standard Registry manages the complete project pipeline:

    • Project listing approval after successful validation

    • VVB accreditation and performance monitoring

    • Monitoring report review and compliance tracking

    • Token issuance authorization based on verified reductions

• A verified presentation of minted tokens, where each mint traces back to all the steps and data backing it

    Testing Workflow State Transitions

    Guardian policies manage complex state transitions across multiple documents and stakeholders. Effective testing validates these transitions handle edge cases and error conditions properly.

    Document Status Flow Testing:

    Potential Error Conditions:

    • VVB rejection scenarios and resubmission workflows

    • Incomplete document submission handling

    • Calculation errors and correction procedures

    • Role permission violations and access control

    • Concurrent user conflicts and resolution

    Integration Testing with Production-Scale Data

    Large Dataset Processing Validation

    VM0033 projects can involve hundreds of hectares with complex stratification requiring extensive monitoring data. Testing with realistic data volumes validates performance and accuracy under production conditions.

    Creating Test Datasets Based on VM0033 Allcot Case:

Using the VM0033_Allcot_Test_Case_Artifact.xlsx as a foundation, create expanded datasets:

    Multi-Year Monitoring Period Simulation

VM0033 projects can operate over 100-year crediting periods, with annual monitoring in the best case. Testing long-term scenarios validates data consistency and calculation accuracy across extended timeframes using data patterns from our VM0033 test case artifact.

    Testing should validate:

    • Calculation consistency across monitoring/crediting periods

    • Carbon stock accumulation tracking over decades

    • Emission reduction trend validation
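As a sketch of how such long-term datasets can be generated, a base monitoring report taken from the test case artifact can be cloned across the crediting period (the field keys below are hypothetical; adapt them to your schema):

// Minimal sketch (hypothetical field keys): clone a base monitoring report
// across a multi-year crediting period for long-term testing
function generateMonitoringSeries(baseReport, startYear, years) {
  const series = [];
  for (let y = 0; y < years; y++) {
    const report = JSON.parse(JSON.stringify(baseReport));
    report.monitoring_year = startYear + y; // hypothetical field key
    series.push(report);
  }
  return series;
}

// Usage: 20 annual reports starting in 2025, given a base report
// loaded from the test artifact
const reports = generateMonitoringSeries(baseMonitoringReport, 2025, 20);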

    Cross-Component Integration Validation

    Schema-Workflow-Calculation Integration Testing

    Part VI testing validates that components from Parts III-V work together seamlessly. This integration testing catches issues that component testing misses.

    Schema Field Mapping Validation:

    Using VM0033's schema structure, test field key consistency:

The important blocks for integration testing are those that carry a document through complete policy execution:

    1. requestVcDocumentBlock captures schema data correctly

    2. customLogicBlock processes schema fields without errors

    3. sendToGuardianBlock stores calculated results properly

    4. mintTokenBlock uses calculation outputs for token quantities

    External Tool Integration

    VM0033 integrates AR-Tool14 and AR-Tool05 for biomass and soil carbon calculations. Make sure you validate that these tools work correctly within complete policy execution.
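One way to validate this is to assert tool outputs against the corresponding values in the methodology spreadsheet's test cases, as in the sketch below (field keys and the expected value are placeholders, not VM0033 reference results):

// Minimal sketch (hypothetical keys and placeholder values): compare a tool
// output against the methodology spreadsheet test case value
function assertToolOutput(calculationResult) {
  const EXPECTED_TREE_BIOMASS_T = 123.4; // placeholder from your test artifact
  const TOLERANCE = 0.01;
  const actual = calculationResult.ar_tool_14.tree_biomass_t; // hypothetical key

  if (Math.abs(actual - EXPECTED_TREE_BIOMASS_T) > TOLERANCE) {
    throw new Error(`AR-Tool14 biomass mismatch: got ${actual}, expected ${EXPECTED_TREE_BIOMASS_T}`);
  }
}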

    Testing Best Practices and Procedures

    Incremental Testing Approach

    Start with simple workflows and progressively add complexity. This approach isolates issues and builds confidence in policy functionality.

    Testing Progression:

    1. Single User, Single Document - Basic PDD submission and processing

    2. Single User, Complete Project - Full project lifecycle for one user type

    3. Multi-User, Single Project - Role interactions and handoffs

    4. Multi-User, Multiple Projects - Concurrent operations and scaling

    5. Production Simulation - Full-scale testing with realistic data volumes

    Dry-Run Artifacts and Validation

    Guardian's dry-run mode creates artifacts that help validate testing results and provide audit trails for methodology compliance.

    Dry-Run Artifacts:

    • Transaction Log: Mock blockchain transactions that would occur in production

    • Document Archive: Complete document history with version tracking

    • IPFS Files: Files that would be stored in distributed storage

    • Token Operations: Credit issuance and transfer records

    • Audit Trail: Complete workflow execution history

    Menu bar showing artifacts tab
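These artifacts can also be retrieved programmatically for automated validation (a minimal sketch using the dry-run endpoints covered in Chapter 23):

// Minimal sketch: fetch dry-run artifacts and mock transactions for review
async function fetchDryRunArtifacts(policyId, ownerToken) {
  const base = `https://guardianservice.app/api/v1/policies/${policyId}/dry-run`;
  const headers = { 'Authorization': `Bearer ${ownerToken}` };

  const artifacts = await (await fetch(`${base}/artifacts`, { headers })).json();
  const transactions = await (await fetch(`${base}/transactions`, { headers })).json();

  return { artifacts, transactions };
}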

    Test Data Management and Version Control

    Maintain test datasets that evolve with your methodology. Version control ensures testing remains valid as policies change.

    Sample Test Data Organization:

    Each test case should include:

    • Input parameters matching your schema structure

    • Expected calculation results from methodology spreadsheets

    • Documentation explaining test scenario purpose

    • Success criteria and validation checkpoints
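A test case file following this structure might look like the sketch below (all field keys and values are illustrative placeholders, not VM0033 reference results):

// Hypothetical contents of test-data/vm0033-base-cases/simple-project.json
{
  "description": "Single-stratum project, one monitoring year",
  "source": "Derived from VM0033_Allcot_Test_Case_Artifact.xlsx",
  "input": {
    "area_of_stratum_ha": 100,
    "biomass_density_stratum_i": 50
  },
  "expected": {
    "net_emission_reductions_tco2e": 1234.5
  },
  "successCriteria": "Calculated result matches expected value within 0.01 tCO2e"
}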

    Chapter Summary

    End-to-end testing validates that your methodology digitization works correctly under real-world conditions. Guardian's dry-run capabilities provide the foundation for this testing, enabling multi-role workflows, production-scale data processing, and component integration validation.

    Key Testing Strategies:

    Multi-Role Testing Framework:

    • Virtual user creation and management

    • Complete stakeholder workflow simulation

    • Role transition and permission testing

    • Document handoff validation

    Production-Scale Validation:

    • Large dataset processing performance

    • Multi-year monitoring period simulation

    • Concurrent user and project handling

    • Integration with external systems

    Cross-Component Integration:

    • Schema-workflow-calculation consistency

    • Field mapping and data flow validation

    • External tool integration testing

    • End-to-end document processing

    Testing Workflow:

    1. Setup dry-run environment with VM0033 policy configuration

    2. Create virtual users representing each methodology stakeholder

    3. Execute complete workflows following VM0033 navigation patterns

    4. Validate integration between schemas, workflows, and calculations

    5. Test production scenarios with realistic data volumes and timeframes

    6. Document results and maintain test case version control

    This testing approach ensures your methodology implementation handles the complexity and scale requirements of production carbon credit programs while maintaining accuracy and compliance with methodology requirements.


    Next Steps: Chapter 23 covers API integration and automation, building on the testing foundation established here to enable programmatic methodology operations and external system integration.

This chapter covers two integration patterns:

1. Data Transformation for External Systems: Converting Guardian project data to external system formats

2. External Data Reception: Accepting monitoring data from external devices and aggregating systems

    Use Case 1: Transforming Data for External Systems

    Introduction to dataTransformationAddon

    Guardian's dataTransformationAddon block enables transformation of Guardian project data into formats required by external registry systems. This block executes JavaScript transformation code that converts Guardian document structures into external API formats.

    Primary Applications:

    • Submitting project data to Verra Project Hub

    • Integrating with Gold Standard registry systems

    • Preparing data for CDM project submissions

    • Custom registry platform integration

    VM0033 DataTransformation Implementation

    The VM0033 policy demonstrates production-grade data transformation in the project-description block:

    Data transformation block in VM0033

    Transformation Code Structure

    The dataTransformationAddon block executes JavaScript code that transforms Guardian documents into any format needed. Here's the core transformation pattern from VM0033:

    Data Transformation Best Practices

    1. Field Mapping Strategy

    2. Data Type Conversions

    3. Complex Object Transformations

    Use Case 2: Receiving Data from External Systems

    External Data Reception Architecture

Guardian's externalDataBlock enables reception of monitoring data from external devices, IoT sensors, and third-party MRV systems. This pattern can be used for automated monitoring reports and real-time project tracking, and it is the approach used in Gold Standard's metered energy cooking policy implemented on Guardian.

    External MRV data integration flow in metered policy

    External Data Flow:

1. Project validation triggers MRV configuration generation; the download config button is bound to validated projects.

2. Download the MRV configuration file

3. External devices/servers use the config to prepare a VC and send data to the /external endpoint

4. externalDataBlock processes and validates incoming data

5. Data aggregates into monitoring reports at a frequency set in the timer block.

    MRV Configuration Download Pattern

    Guardian implements a download-based pattern for external data integration. When a project is validated, a comprehensive MRV configuration file becomes available for download:

    External Data Submission Endpoint

    Guardian exposes an /external endpoint for receiving data from external systems:

    Endpoint Structure:

    Authentication:

    Data Payload Format:

    ExternalDataBlock Implementation

    The externalDataBlock handles incoming external data with validation and processing:

    MRV Sender Integration

    Guardian includes an MRV sender tool that simulates external data submission. The source code is available here - https://github.com/hashgraph/guardian/tree/main/mrv-sender

    Key Configuration Elements:

    • URL: External endpoint (https://guardianservice.app/api/v1/external)

    • Hedera Integration: Account ID and private key for blockchain transactions

    • Schema Context: Complete JSON-LD schema definition with field types

    • DID Documents: Verification methods and authentication keys

    • Policy References: Policy ID, tag, and document reference for linking

    Data Generation Options:

    • Values Mode: Use specific values for each field

    • Templates Mode: Use predefined data templates

    • Random Mode: Generate random values within specified ranges

    Chapter Summary

    This chapter demonstrated Guardian's bidirectional integration capabilities through two essential patterns:

    Data Transformation for External Systems using dataTransformationAddon blocks enables Guardian to export project data in formats required by external registries. The VM0033 implementation shows production-grade JavaScript transformation code that converts Guardian documents into external system formats.

    External Data Reception using externalDataBlock and MRV configurations enables automated monitoring data collection from external devices and systems. The metered energy policy pattern demonstrates how projects generate downloadable MRV configuration files that external systems use to submit data back to Guardian.

    Key Implementation Elements:

    • JavaScript-based data transformation within Guardian policy blocks

    • Comprehensive MRV configuration files with schema definitions and DID documents

    • Hedera blockchain integration for secure data transactions

    • Schema validation and document verification for incoming data

    • Timer-based aggregation for monitoring report generation

    These integration patterns enable Guardian to function as a comprehensive platform in environmental certification ecosystems, supporting both automated data collection and seamless registry integration.

    Next Steps: Chapter 28 will explore advanced Guardian features including multi-methodology support, AI-powered search capabilities, and future platform developments.


    Artifacts and References

    Related Documentation

    • External Data Workflow Block

    • Custom Logic Block

    • VM0033 Policy JSON

    Code Examples

    • dataTransformationAddon Configuration

    • External Data Submission Format

    • MRV Configuration Structure

    Chapter 23: API Integration and Automation

    Automating methodology operations and integrating with external systems using Guardian's REST API framework

    Chapter 22 covered manual testing workflows. Chapter 23 shows you how to automate these processes using Guardian's comprehensive API framework. Using the same VM0033 patterns, you'll learn to automate data submission, integrate with monitoring systems, and build testing frameworks that scale.

    Guardian's APIs enable programmatic access to all functionality available through the UI. This automation capability transforms methodology operations from manual processes into scalable, integrated systems that connect with existing organizational infrastructure.

    Guardian API Framework Overview

    Authentication and API Access

    Guardian uses JWT-based authentication for API access. All API calls require authentication headers except for initial login and registration endpoints.

    Access Token API:

A refresh token is available in the response of the login (or loginByEmail) endpoints.

Base API URL Pattern: All Guardian APIs follow the pattern https://guardianservice.app/api/v1/. If you're using a local setup, the host changes to http://localhost:3000 (or whatever your port configuration is).

    For dry-run operations, the typical URL structure is:

    • Policy blocks: /api/v1/policies/{policyId}/blocks/{blockId}

    • Dry-run operations: /api/v1/policies/{policyId}/dry-run/

    VM0033 Policy API Structure

Submitting data via the API is much faster than manual form filling when a schema is large. Using the VM0033 policy JSON we analyzed, here's how API endpoints map to actual policy blocks:

    VM0033 Key Block IDs from Policy JSON:

    • PDD Submission Block: 55df4f18-d3e5-4b93-af87-703a52c704d6 - UUID of add_project_bnt

    • Monitoring Report Block: 53caa366-4c21-46ff-b16d-f95a850f7c7c - UUID of add_report_bnt

For every dry run triggered, these IDs change, so make sure you have the latest ones.
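Rather than hard-coding block UUIDs, you can resolve the current ID from the block tag at runtime (a minimal sketch; verify the tag endpoint against your Guardian version):

// Minimal sketch: resolve a block ID from its policy tag (e.g. add_project_bnt)
async function getBlockIdByTag(policyId, tag, authToken) {
  const response = await fetch(
    `https://guardianservice.app/api/v1/policies/${policyId}/tag/${tag}`,
    { headers: { 'Authorization': `Bearer ${authToken}` } }
  );
  const { id } = await response.json();
  return id;
}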

    API Endpoint Construction:

    Dry-Run API Operations

    Virtual User Management for API Testing

    Guardian's dry-run APIs enable automated testing with virtual users, simulating multi-stakeholder workflows programmatically.

    Creating and Managing Virtual Users:

    Automated Workflow Execution

    Using dry-run APIs, you can execute complete VM0033 workflows programmatically to validate methodology implementation.

    Complete VM0033 Workflow Automation:

    Automated Testing Frameworks

    Cypress Testing Integration

    Building on Guardian's API patterns, you could create automated testing suites that validate methodology implementation across multiple scenarios.

VM0033 Cypress Test Suite (sample):
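A single spec in such a suite might look like the sketch below (untested; the fixture name and environment keys are assumptions to adapt to your setup):

// cypress/e2e/vm0033-methodology.cy.js - illustrative sketch only
describe('VM0033 PDD submission', () => {
  it('submits a PDD through the policy block API', () => {
    cy.fixture('pdd-request-body.json').then((body) => {
      cy.request({
        method: 'POST',
        url: `/api/v1/policies/${Cypress.env('policyId')}/blocks/${Cypress.env('pddBlockId')}`,
        headers: { Authorization: `Bearer ${Cypress.env('authorization')}` },
        body,
      }).its('status').should('eq', 200);
    });
  });
});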

    Chapter Summary

    API integration transforms Guardian methodology implementations from manual processes into automated, scalable systems. Using VM0033's patterns, you can automate data submission, integrate with external monitoring systems, build comprehensive testing frameworks, and manage production operations efficiently.

    Key API Integration Patterns:

    Automated Data Submission:

    • PDD and monitoring report API automation using requestVcDocumentBlock endpoints

    • Multi-year monitoring data generation and submission workflows

    • Error handling and validation for automated submissions

    Dry-Run API Operations:

    • Virtual user creation and management for multi-stakeholder testing

    • Programmatic workflow execution and validation

    • Artifact collection and analysis for testing validation

    External System Integration:

    • IoT sensor data transformation and submission to Guardian monitoring workflows

    • Registry integration with automated project listing and status synchronization

    • Real-time data pipeline integration for continuous monitoring operations

    Production API Management:

    • Rate limiting and retry logic for robust production operations

    • Performance testing and load validation for production scalability

    • Error handling and monitoring for long-term operational reliability

    Implementation Workflow:

    1. Establish API authentication and access token management

    2. Map policy block IDs to API endpoints using policy JSON structure

    3. Build automation scripts for data submission and workflow execution

    4. Create testing frameworks using Cypress and Guardian's dry-run APIs

    API integration enables methodology implementations that scale from prototype testing to production operations, supporting hundreds of projects and thousands of stakeholders while maintaining accuracy and compliance with methodology requirements.


    Next Steps: This completes Part VI: Integration and Testing. Your methodology implementation is now ready for production deployment with comprehensive testing coverage and scalable API automation capabilities.

    Chapter 1: Introduction to Methodology Digitization

    Methodology digitization transforms how environmental certification actually works in carbon markets. Instead of manual processes where projects spend months navigating paper-based workflows, digitization creates automated, blockchain-verified systems that can handle the complexity of modern carbon methodologies while maintaining the rigor these markets require.

    This isn't just about converting PDFs to digital forms. We're talking about recreating entire certification processes - from project registration through credit issuance - as executable digital policies where methodology requirements like VM0033 become part of streamlined, transparent workflows.

    What You'll Learn: Core concepts for methodology digitization using VM0033 as a working example. You'll understand why digitization is becoming essential and how the Guardian platform makes complex methodology implementation practical.

Main worksheet: Project Description (Auto)

Row 1: Project Description (Auto)
Row 2: Description
Row 3: Schema Type | Verifiable Credentials
Row 4: Required Field | Field Type | Parameter | Visibility | Question | Allow Multiple Answers | Answer
Row 5: Yes | Enum | Choose project certific (enum) | | Choose project certification type | No | VCS v4.4
Row 6: No | VCS Project Description v4.4 | | TRUE | VCS Project Description | No |
Row 7: No | CCB | | FALSE | CCB & VCS Project Description | No |

Enum worksheet for "Choose project certification type":

Schema name | Project Description (Auto)
Field name | Choose project certification type
Loaded to IPFS | No
Values | VCS v4.4; CCB v3.0 & VCS v4.4

Sub-schema worksheet: VCS Project Description v4.4

Description |
Schema Type | Sub-Schema

Required Field | Field Type | Parameter | Visibility | Question | Allow Multiple Answers | Answer
Yes | String | | | Project title | No | example
Yes | String | | | Project ID | No | example
Yes | URL | | | Project Website | No | https://example.com
Yes | Date | | | Start Date | No | 2000-01-01
Yes | Date | | | End Date | No | 2000-01-01

Sub-schema worksheet: CCB

Description |
Schema Type | Sub-Schema

Required Field | Field Type | Parameter | Visibility | Question | Allow Multiple Answers | Answer
Yes | String | | | CCB Standard | No | example
Yes | String | | | CCB Project Type | No | example
Yes | Date | | | Auditor Site Visit Start Date | No | 2000-01-01
Yes | Number | | | Latitude (Decimal Degrees) | No | 1
Yes | Number | | | Longitude (Decimal Degrees) | No | 1
Yes | Number | | | Acres/Hectares | No | 1
Yes | Enum | AcresHectares (enum) | | Acres/Hectares | No | Acres

Enum worksheet for "Acres/Hectares":

Schema name | Project Description (Auto)
Field name | Acres/Hectares
Loaded to IPFS | No
Values | Acres; Hectares

Additional example rows (same column layout):

Yes | Date | | | Project Start Date | No | 2000-01-01
Yes | Date | | | Project End Date | No | 2000-01-01
Yes | Number | | | Crediting Period Length (years) | No | 10
Yes | String | | | Stratum number | No | example
Yes | Number | | | Area of stratum (ha) – Ai,t | No | 1
Yes | Number | | | Biomass density (t d.m. ha-1) | No | 1
Yes | String | | | Data source for biomass density | No | example
Yes | String | | | Justification for parameter selection | No | example
Yes | Enum | Which method did you us (enum) | | Which method did you use for estimating change in carbon stock in trees? | No | Between two points of time
No | Number | | FALSE | Mean annual change in carbon stock (t CO2e yr-1) | No | 1
No | Number | | FALSE | Carbon fraction of tree biomass (CF_TREE) | No | 1
No | Number | | FALSE | Default mean annual increment (Δb_FOREST) | No | 1
Yes | AR Tool 14 | | | AR Tool 14 | No |

Enum worksheet for "Which method did you use for estimating change in carbon stock in trees?":

Schema name | Project Description (Auto)
Field name | Which method did you use for estimating change in carbon stock in trees?
Loaded to IPFS | No
Values | Between two points of time; Difference of two independent stock estimations

Tool worksheet: AR Tool 14

Description | Biomass estimation using AR Tool 14
Schema Type | Tool-integration
Tool | AR Tool 14
Tool Id | [tool message id if available]

Required Field | Field Type | Parameter | Visibility | Question | Allow Multiple Answers | Answer
Yes | Number | | | Tree height (m) | No | 1
Yes | Number | | | Diameter at breast height (cm) | No | 1
Yes | Number | | | Wood density (g cm-3) | No | 1
Yes | (New) Final Baseline Emissions | | | Baseline Emissions | No |

Sub-schema worksheet: (New) Final Baseline Emissions

Description |
Schema Type | Sub-Schema

Required Field | Field Type | Parameter | Visibility | Question | Allow Multiple Answers | Answer
Yes | Number | | | Year t | No | 1
Yes | String | | | Stratum number | No | example
Yes | Enum | It's a baseline scenari (enum) | | It's a baseline scenario or project scenario? | No | Baseline scenario
Yes | Number | | | Mean annual change in carbon stock in trees (t CO2e yr-1) | No | 1
Yes | (New) Project Emissions | | | Project Emissions | No |
No | Auto-Calculate | | | Total Emission Reductions (t CO2e) | No | 2
Yes | Image | | | Site photograph | No | ipfs://05566a658a44c6f747b5f82a2de1e0bf
No | String | | | Document description | No | example
No | Help Text | {"color":"#FF0000","size":"14px"} | | Parameter Help | No | This parameter represents...
No | String | | Hidden | Internal project ID | No | example
Yes | String | | | Project Developer Name | No | example
Yes | Pattern | [0-9]{4} | | Four-digit year | No | 2024
    // With good field keys
    const totalEmissions = data.biomass_density_stratum_i * data.area_hectares;
    
    // With default keys
    const totalEmissions = data.field0 * data.field1; // What do these represent?
    // Structure from final-PDD-vc.json artifact
    const vm0033TestData = {
        "document": {
            "credentialSubject": [{
                // Complete VM0033 test case data including:
                // - Baseline emissions calculations
                // - Project emissions calculations
                // - Leakage calculations
                // - Final net emission reduction results
                // - All intermediate calculation values
            }]
        }
    };
    // Example debugging in customLogicBlock
    function calculateBaseline(document) {
        const baseline = document.baseline_scenario;
    
        // Calculate fire emissions
        const fireEmissions = baseline.area_data.baseline_fire_area *
                             baseline.emission_factors.fire_emission_factor;
        debug("Fire Emissions Calculation", {
            area: baseline.area_data.baseline_fire_area,
            factor: baseline.emission_factors.fire_emission_factor,
            result: fireEmissions
        });
    
    // Calculate total baseline emissions (other components omitted in this sketch)
    const totalBaseline = fireEmissions; // + other emission components
        debug("Total Baseline Emissions", totalBaseline);
    
        return totalBaseline;
    }
    # From /e2e-tests folder
    npm install cypress --save-dev
    
    # Configure authorization in cypress.env.json
    {
        "authorization": "your_access_token_here"
    }
    
    # Run specific methodology tests
    npx cypress run --spec "tests/vm0033-methodology.cy.js"
    # Start dry-run mode
    PUT /api/v1/policies/{policyId}/dry-run
    
    # Create virtual user
    POST /api/v1/policies/{policyId}/dry-run/user
    
    # Execute block dry-run
    POST /api/v1/policies/{policyId}/dry-run/block
    
    # Get transaction history
    GET /api/v1/policies/{policyId}/dry-run/transactions
    
    # Get artifacts
    GET /api/v1/policies/{policyId}/dry-run/artifacts
    
    # Restart policy execution
    POST /api/v1/policies/{policyId}/dry-run/restart
    PDD: Draft → Submitted → Under Review → Validated → Approved
    Monitoring Report: Draft → Submitted → Under Verification → Verified → Credits Issued
    VVB Status: Applicant → Under Review → Approved → Active → Suspended/Revoked
    // Generate multiple project instances for load testing
    function generateTestProjects(baseProject, count) {
      const testProjects = [];
      for (let i = 0; i < count; i++) {
        const project = JSON.parse(JSON.stringify(baseProject));
        project.project_details.G5 = `Test Project ${i + 1}`;
        project.baseline_scenario.area_data.total_project_area = 100 + (i * 50);
        testProjects.push(project);
      }
      return testProjects;
    }
    
    // Test concurrent project submissions
    const multipleProjects = generateTestProjects(vm0033BaseProject, 25);
    // Verify schema field keys match calculation block references
    const pddSchema = vm0033Policy.schemas.find(s => s.name === "PDD Schema");
    const calculationBlock = vm0033Policy.blocks.find(b => b.tag === "er_calculations");
    
    // Test field key mapping
    function validateFieldMapping(schema, calculationCode) {
      const schemaFields = extractFieldKeys(schema);
      const calculationReferences = extractFieldReferences(calculationCode);
    
      const unmappedFields = calculationReferences.filter(
        ref => !schemaFields.includes(ref)
      );
    
      if (unmappedFields.length > 0) {
        console.error("Unmapped calculation references:", unmappedFields);
        return false;
      }
      return true;
    }
    test-data/
    ├── vm0033-base-cases/
    │   ├── simple-project.json
    │   ├── complex-stratified-project.json
    │   └── multi-year-monitoring.json
    ├── edge-cases/
    │   ├── zero-emissions.json
    │   ├── maximum-parameters.json
    │   └── error-conditions.json
    └── performance/
        ├── large-dataset.json
        ├── concurrent-users.json
        └── long-term-simulation.json
    {
      "id": "819d94e8-7d1d-43c1-a228-9b6fa1982e3f",
      "blockType": "dataTransformationAddon",
      "defaultActive": false,
      "permissions": ["Project_Proponent"],
      "onErrorAction": "no-action",
      "uiMetaData": {},
      "expression": "(function calc() { /* transformation code */ })"
    }
    (function calc() {
      const jsons = [];
      if (documents && documents.length > 0) {
        documents.forEach((doc) => {
          const document = doc.document;
    
          // Build external registry format
          const json = {
            id: '',
            projectNumber: null,
            accountId: '',
            standardTemplate: '',
            standardTemplateName: '',
            methodologyTemplateTitle: '',
            methodologyTemplate: '',
            projectName: '',
            projectDescription: '',
            website: null,
            projectSubmissionStatus: 'Draft',
            fetchProjectBoundaryFromCalculationInput: false,
            estimatedProjectStartDate: '',
            creditPeriod: {
              startDate: '',
              endDate: '',
            },
            projectSize: null,
            averageAnnualVolume: null,
            // ... complete transformation structure
          };
    
          // Map Guardian fields to external format
          if (document.credentialSubject && document.credentialSubject.length > 0) {
            const credentialSubject = document.credentialSubject[0];
    
            // Direct field mapping
            json.projectName = credentialSubject.projectTitle || '';
            json.projectDescription = credentialSubject.projectObjective || '';
            json.website = credentialSubject.projectWebsite || null;
    
            // Complex nested mappings
            if (credentialSubject.creditingPeriod) {
              json.creditPeriod.startDate = credentialSubject.creditingPeriod.startDate || '';
              json.creditPeriod.endDate = credentialSubject.creditingPeriod.endDate || '';
            }
    
            // Conditional transformations
            if (credentialSubject.projectBoundary) {
              json.fetchProjectBoundaryFromCalculationInput = true;
              json.calculationInputs = {
                projectBoundaryProject: credentialSubject.projectBoundary.project || [],
                projectBoundaryBaseline: credentialSubject.projectBoundary.baseline || []
              };
            }
          }
    
          jsons.push(json);
        });
      }
      return jsons;
    })()
    // Use defensive programming for nested objects
    const projectData = credentialSubject?.projectDetails?.data || {};
    const creditPeriod = credentialSubject?.creditingPeriod || {};
    
    // Provide defaults for required external fields
    json.projectSubmissionStatus = credentialSubject.status || 'Draft';
    json.projectType = credentialSubject.projectType || '14'; // Default for project hub
    // Date format transformations
    json.estimatedProjectStartDate = credentialSubject.startDate ?
      new Date(credentialSubject.startDate).toISOString().split('T')[0] : '';
    
    // Numeric conversions
    json.projectSize = credentialSubject.projectArea ?
      parseFloat(credentialSubject.projectArea) : null;
    // VCS-specific transformations
    if (credentialSubject.vcsDetails) {
      json.vcs = {
        afoluActivities: credentialSubject.vcsDetails.activities || [],
        projectValidatorId: credentialSubject.assignedVVB || null,
        additionalProjectTypes: credentialSubject.vcsDetails.additionalTypes || [],
        earlyAction: credentialSubject.vcsDetails.earlyAction || null
      };
    }
    
    // Location data transformation
    if (credentialSubject.projectLocations) {
      json.locations = credentialSubject.projectLocations.map(loc => ({
        country: loc.country,
        region: loc.region,
        coordinates: {
          latitude: loc.lat,
          longitude: loc.lng
        }
      }));
    }
    {
      "url": "https://guardianservice.app/api/v1/external",
      "topic": "0.0.6365927",
      "hederaAccountId": "0.0.1752750328257",
      "hederaAccountKey": "302e020100300506032b6570042204205ee9abd705b66b67ebd324c717df5c66551e1be5f02f1746585389683b38970e",
      "installer": "did:hedera:testnet:EL6mjqKzu6W4fcuSbXXY9Z3GrdWLrqFSZKM5r6wmXVyv_0.0.4967862",
      "did": "did:hedera:testnet:8XJydX2sCfLL55CsdZqW3qE4n5TT1MGYQuDgz8ZNbawG_0.0.1752750228712",
      "type": "4b1f8509-0c5e-4165-b779-a440507abe42",
      "schema": {
        "@context": {
          "@version": 1.1,
          "@vocab": "https://w3id.org/traceability/#undefinedTerm",
          "id": "@id",
          "type": "@type",
          "f9008ddc-f1f6-476c-8eb8-41ac7d05985c": {
            "@id": "schema:f9008ddc-f1f6-476c-8eb8-41ac7d05985c#f9008ddc-f1f6-476c-8eb8-41ac7d05985c",
            "@context": {
              "device_id": {"@type": "https://www.schema.org/text"},
              "policyId": {"@type": "https://www.schema.org/text"},
              "ref": {"@type": "https://www.schema.org/text"},
              "date_from": {"@type": "https://www.schema.org/text"},
              "date_to": {"@type": "https://www.schema.org/text"},
              "eg_p_d_y": {"@type": "https://www.schema.org/text"}
            }
          }
        }
      },
      "context": {
        "type": "4b1f8509-0c5e-4165-b779-a440507abe42",
        "@context": ["schema:4b1f8509-0c5e-4165-b779-a440507abe42"]
      },
      "didDocument": {
        "id": "did:hedera:testnet:8XJydX2sCfLL55CsdZqW3qE4n5TT1MGYQuDgz8ZNbawG_0.0.1752750228712",
        "@context": "https://www.w3.org/ns/did/v1",
        "verificationMethod": [
          {
            "id": "did:hedera:testnet:8XJydX2sCfLL55CsdZqW3qE4n5TT1MGYQuDgz8ZNbawG_0.0.1752750228712#did-root-key",
            "type": "Ed25519VerificationKey2018",
            "controller": "did:hedera:testnet:8XJydX2sCfLL55CsdZqW3qE4n5TT1MGYQuDgz8ZNbawG_0.0.1752750228712",
            "publicKeyBase58": "99mGcpmbJaqUeMP5d6xYuUazcsNz8HHGRE3Rq1uoAFwm"
          }
        ]
      },
      "policyId": "6878b442dc6c9d1d13744cf8",
      "policyTag": "Tag_1752740851403",
      "ref": "did:hedera:testnet:8XJydX2sCfLL55CsdZqW3qE4n5TT1MGYQuDgz8ZNbawG_0.0.1752750228712"
    }
    POST /api/v1/policies/{policyId}/blocks/{blockId}/external
    // Bearer token authentication
    Authorization: Bearer <project_token>
    {
      "document": {
        "@context": ["https://www.w3.org/2018/credentials/v1"],
        "type": ["VerifiableCredential"],
        "credentialSubject": {
          "type": "MonitoringData",
          "id": "did:hedera:testnet:...",
          "accountId": "0.0.123456",
          "amount": 1250.5,
          "date": "2024-09-15",
          "period": "2024-Q3",
          "deviceId": "SENSOR_001",
          "location": {
            "lat": 37.7749,
            "lng": -122.4194
          },
          "measurements": {
            "co2Flux": 12.5,
            "soilMoisture": 0.85,
            "temperature": 22.3
          }
        }
      },
      "ref": "project_document_reference"
    }
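Submitting such a payload from an external system is a single authenticated POST (a minimal sketch; the policy and block IDs come from your MRV configuration):

// Minimal sketch: POST monitoring data to Guardian's external endpoint
async function submitExternalData(policyId, blockId, projectToken, payload) {
  const response = await fetch(
    `https://guardianservice.app/api/v1/policies/${policyId}/blocks/${blockId}/external`,
    {
      method: 'POST',
      headers: {
        'Content-Type': 'application/json',
        'Authorization': `Bearer ${projectToken}`
      },
      body: JSON.stringify(payload)
    }
  );
  return response.json();
}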
    @ActionCallback({
      output: [
        PolicyOutputEventType.RunEvent,
        PolicyOutputEventType.RefreshEvent,
        PolicyOutputEventType.ErrorEvent
      ]
    })
    async receiveData(data: IPolicyDocument) {
      const ref = PolicyComponentsUtils.GetBlockRef<AnyBlockType>(this);
    
      // Verify document signature and schema
      let verify: boolean;
      try {
        const VCHelper = new VcHelper();
        const res = await VCHelper.verifySchema(data.document);
        verify = res.ok;
        if (verify) {
          verify = await VCHelper.verifyVC(data.document);
        }
      } catch (error) {
        ref.error(`Verify VC: ${PolicyUtils.getErrorMessage(error)}`);
        verify = false;
      }
    
      // Get document owner and validate relationships
      const user: PolicyUser = await PolicyUtils.getDocumentOwner(ref, data, null);
      const documentRef = await this.getRelationships(ref, data.ref);
      const schema = await this.getSchema();
    
      // Create and validate document
      let doc = PolicyUtils.createVC(ref, user, VcDocument.fromJsonTree(data.document));
      doc.type = ref.options.entityType;
      doc.schema = ref.options.schema;
      doc.signature = verify ? DocumentSignature.VERIFIED : DocumentSignature.INVALID;
      doc = PolicyUtils.setDocumentRef(doc, documentRef);
    
      // Validate using child validator blocks
      const state: IPolicyEventState = { data: doc };
      const error = await this.validateDocuments(user, state);
      if (error) {
        throw new BlockActionError(error, ref.blockType, ref.uuid);
      }
    
      // Trigger workflow events
      ref.triggerEvents(PolicyOutputEventType.RunEvent, user, state);
      ref.triggerEvents(PolicyOutputEventType.RefreshEvent, user, state);
    }

• Integrate external systems through data transformation and API orchestration

• Deploy production monitoring with error handling and performance optimization

Artifacts and references:

• VM0033 policy JSON

• Add Project Button JSON config

• Authorization header can be extracted via the dev tools console
    What is Methodology Digitization?

    The Challenge: Carbon markets still rely heavily on manual processes. Project developers submit PDFs, validators review paper documents, and registries track everything through email chains and spreadsheets. This works, but it's slow, error-prone, and difficult to verify.

    Our Approach: Instead of digitizing documents, we digitize entire certification processes. We transform workflows themselves into automated, blockchain-verified systems where methodology requirements are embedded directly into the certification process. Every step becomes traceable, calculations are automated, and stakeholders can work within a single platform rather than juggling multiple systems.

    Technical Benefits:

    • Automated validation: Built-in validation eliminates manual calculation errors and ensures methodology compliance

    • Immutable transparency: Every transaction and decision recorded on Hedera Hashgraph for complete audit trails

    • Process efficiency: Certification workflows accelerated from weeks to hours through automation

    • Systematic accuracy: Embedded validation logic prevents implementation mistakes that occur in manual processes

    Implementation Approach:

    1. Systematic analysis of certification workflows and stakeholder interactions across the complete process

    2. Technical mapping of roles, data flows, and decision points within certification frameworks

    3. Integration design where methodology requirements (like VM0033) are embedded into automated certification workflows

    4. Policy implementation as executable digital workflows that maintain methodology precision while automating processes

    5. Validation framework ensuring both methodology integrity and certification standard compliance

    VM0033 Example: The Digital Policy for Tidal Wetland and Seagrass Restoration demonstrates how digitization transforms entire certification processes:

    • Scope: Complete blue carbon project certification from registration to credit issuance

    • Stakeholders: Full ecosystem including Project Developers, VVBs, Registry Operators, and communities

    • Embedded Methodology: VM0033's specific requirements for soil carbon accounting and monitoring integrated into broader certification workflows

    • Process Automation: Manual certification steps (document review, calculation verification, stakeholder coordination) converted to automated digital workflows

    • Result: Complete digital certification process where VM0033 methodology requirements are embedded within automated policy workflows

    Production Impact: VM0033 digitization resulted in the first fully automated blue carbon project certification workflow in production use on Verra's platform.

    Why VM0033 Works as Our Reference:

    • Market significance: Leading methodology in the rapidly expanding blue carbon sector

    • Technical complexity: 130-page methodology with sophisticated calculation requirements ideal for demonstrating digitization capabilities

    • Real-world validation: Currently in production use, proving the digitization approach works at scale

    • Comprehensive scope: Global applicability across diverse coastal restoration contexts provides robust testing ground

    Guardian Platform Overview

    Guardian is a production-ready platform for environmental asset tokenization and certification workflow digitization, built on Hedera Hashgraph's distributed ledger technology. The platform is designed to handle the complexity requirements of real environmental methodologies while maintaining the performance and reliability needed for carbon market operations.

    Technical Architecture:

    • Policy Workflow Engine (PWE): Configurable workflow system that adapts to any environmental methodology's specific requirements

    • Microservices Design: Distributed architecture with dedicated services for authentication, policy execution, calculation processing, and data management

    • Hedera Hashgraph Integration: Immutable transaction recording and consensus mechanisms for audit trail integrity

    • IPFS Document Management: Decentralized storage ensuring supporting documentation remains accessible throughout project lifecycles

    Platform Capabilities:

    • Multi-stakeholder Coordination: Role-based access control accommodating complex stakeholder ecosystems (developers, validators, registries, communities)

    • Automated Calculation Engine: Processes complex environmental calculations with built-in validation logic to ensure accuracy

    • Standards Agnostic Design: Architecture supports VCS, CDM, Gold Standard, and custom methodology implementations

    • End-to-End Audit Trails: Complete immutable record of all actions from initial data collection through final token issuance

    Technical Foundation:

    • Microservices Architecture: Dedicated services for authentication, policy execution, data management, blockchain integration

    • Stakeholder Management: Project developers, VVBs, and registry operators work within single integrated platform

    • Immutable Records: All transactions and data modifications recorded on Hedera blockchain

    • Document Preservation: IPFS ensures supporting documentation remains accessible throughout project lifecycle

    See Guardian architecture for detailed technical specifications and the Artifacts Collection for working examples and validation tools.

    The VM0033 Case Study

    VM0033 (Methodology for Tidal Wetland and Seagrass Restoration) serves as the ideal digitization case study due to its comprehensive complexity and ongoing real-world production use by Verra.

    Methodology Scope and Complexity

    Ecosystem Coverage:

    • Tidal Forests: Mangroves and other woody vegetation under tidal influence

    • Tidal Marshes: Emergent herbaceous vegetation in intertidal zones

    • Seagrass Meadows: Submerged aquatic vegetation in shallow coastal waters

    Restoration Activities:

    • Hydrological management (tidal flow, connectivity, barriers)

    • Sediment supply (beneficial use of dredge material, diversions)

    • Salinity management (freshwater inputs, tidal exchange)

    • Water quality improvement (nutrient reduction, flushing)

    • Vegetation management (native species, invasive control)

    Technical Complexity (130-page methodology):

    • Carbon Pools: Above-ground biomass, below-ground biomass, dead wood, litter, soil organic carbon

    • GHG Sources: CO₂, CH₄, and N₂O with specific procedures for each

    • Emission Reduction & Removals: Through biomass accumulation, soil carbon increases, reduced methane/nitrous oxide emissions, avoided soil carbon loss

    Stakeholder Ecosystem and Workflow Complexity

    Key Stakeholders:

    • Project Developers: Implement restoration activities, collect monitoring data

    • VVBs: Conduct independent assessments of project performance

    • Registry Operators: Oversee process from registration to credit issuance

    • Local Communities: Provide traditional knowledge, participate in activities

    • Technical Experts: Wetland ecology, hydrology, soil science, carbon accounting

    Workflow Complexity:

    • Decision Trees: Multiple conditional logic paths based on project characteristics

    • Baseline Scenarios: Evaluation of multiple potential scenarios with specific selection criteria

    • Variable Monitoring: Requirements vary by project activities, ecosystem types, carbon pools

    • Role-Based Access: Sophisticated user management and workflow coordination required

    Roles Available for VM0033

    Calculation Methodology and Technical Requirements

    Carbon Accounting Approaches:

    • Soil Organic Carbon: Total stock approach or stock loss approach based on project characteristics

    • Key Variables: Peat Depletion Time (PDT) for organic soils, Soil Organic Carbon Depletion Time (SDT) for mineral soils

    • Biomass Calculations: CDM tool AR-Tool14 for trees/shrubs, specialized methods for herbaceous vegetation

    • Sea Level Rise: Integration of climate projection data for subsidence and biomass loss

    Calculation Complexity:

    • Multiple Pathways: CH₄ and N₂O estimated via proxies, modeling, default factors, or local values

    • Long-term Projections: 100-year data requirements for permanence and climate impacts

    • Geographic Boundaries: Dynamic boundaries affected by sea level rise over time

    • Uncertainty Analysis: Sophisticated error propagation across multiple variables

    A calculation code sample

    Guardian Implementation Patterns

    Modular Architecture Benefits:

• Reusable Tools: CDM tools (AR-Tool05, AR-Tool14) and the AFOLU Non-Permanence Risk Tool implemented as Guardian tools

    • Cross-Methodology Sharing: Tools can be shared across multiple methodologies

    • Strata Management: Sophisticated data organization for strata-level calculations

    • Data Integrity: Schema system maintains validation requirements and data structures

    Real-World Production Use

    ABC Mangrove Project:

• First Digital Project: Allcot's project represents the first truly digital project listed on Verra Project Hub via Guardian

    • Complete Workflow: Supports end-to-end process from project design to carbon credit issuance

    • Compliance Maintained: Full adherence to VM0033's scientific and regulatory requirements

    • Process Streamlining: Digital implementation reduces development time while improving accuracy

    Allcot ABC Mangrove Project

    About Blue Carbon Projects

    Market Impact:

    • Critical Climate Tool: Incentivizes restoration and conservation of coastal ecosystems under increasing pressure

    • Global Applicability: Supports projects worldwide from Southeast Asian mangroves to Mediterranean seagrass

    • High Carbon Storage: Coastal ecosystems store carbon at rates up to 10x higher than terrestrial forests

    • Climate Goals: Essential for achieving global climate mitigation targets

    Guardian Platform Benefits:

    • Market Transparency: Complete project histories and verification records accessible to investors/buyers

    • Accountability: Blockchain-based immutable record keeping builds market confidence

    • Environmental Integrity: Detailed carbon accounting ensures credit quality and market trust

    Benefits and Challenges of Methodology Digitization

    Key Benefits

    Transparency & Trust:

    • Every action, calculation, and decision recorded immutably on blockchain

    • Unprecedented visibility into carbon credit generation process

    • Addresses long-standing concerns about environmental asset integrity

    Efficiency Gains:

    • Time Reduction: Manual processes from weeks/months to hours/days

    • Automated Validation: Immediate flagging of inconsistencies or missing information

    • Cost Reduction: Lower costs for all stakeholders through process automation

    • User Experience: Streamlined workflows improve overall experience

    Advanced Automation:

    • Complex Calculations: Automatic soil organic carbon calculations from monitoring data

    • Emission Factors: Automatic application of appropriate factors

    • Report Generation: Automated verification reports with methodology compliance

    • Workflow Management: End-to-end process automation

    Data Quality:

    • Built-in Validation: Automatic enforcement of data quality requirements

    • Standardized Formats: Consistent data structures across projects

    • Error Reduction: Automated validation reduces human errors

    • Reliability: Improved environmental asset calculation accuracy

    See Guardian's schema system for data validation details.

    Real-World Digitization Challenges

    Scale and Complexity:

    • Parameter Management: Hundreds of parameters across multiple strata

    • Long-term Projections: 100-year data requirements for permanence calculations

    • Ecological Zones: Numerous variables with specific calculation and validation rules

    • Schema Design: Substantial complexity in data structure management

    Complexity Reality Check: VM0033 requires managing hundreds of parameters across multiple strata, with some calculations requiring 100-year data projections. This scale requires systematic approaches and robust data management strategies.

    Technical Implementation:

    • External Dependencies: Multiple CDM tools (AR-Tool02, AR-Tool05, AR-Tool14) requiring integration

    • Scientific Translation: Converting complex calculations to executable code while maintaining accuracy

    • Data Integration: Multiple sources (satellite imagery, field measurements) with diverse formats

    • Regulatory Compliance: Ensuring digital implementation meets all methodology requirements

    Organizational Challenges:

    • Stakeholder Adoption: Environmental professionals transitioning from PDF-based workflows

    • Training Requirements: Support needed for effective use of digitized systems

• Change Management: Moving from familiar processes to tech-driven policy engines

    • Ongoing Support: Continuous assistance required for successful adoption

    Systematic Solutions and Best Practices

    Guardian's Solution Framework:

    Modular Architecture:

    • Reusable Components: Common calculation tools developed once, used across methodologies

    • Flexible Implementation: Policy Workflow Engine maintains scientific accuracy and regulatory compliance

    • Scalable Design: Handles complex methodologies while supporting future expansion

    Data Management Solutions:

    • Reliable Storage: IPFS integration for document storage, Hedera Hashgraph for immutable records

    • Long-term Permanence: Combination provides reliability needed for environmental asset management

    • Data Integrity: Ensures accessibility and integrity over project lifetimes

    Integration Capabilities:

    • API Framework: Comprehensive integration with existing systems and data sources

    • Migration Support: Reduces burden of transitioning from legacy systems

    • Infrastructure Leverage: Organizations can build on existing monitoring and verification investments

    Regulatory Compliance:

    • Standards Collaboration: Close partnership with standards bodies (e.g., Verra for VM0033)

    • Continuous Validation: Ongoing verification against original methodology requirements

    • Proven Implementation: VM0033 production deployment demonstrates compliance capability

    Key Success Factors:

    • Systematic Approach: Methodology digitization requires comprehensive planning, not just technical implementation

    • Stakeholder Engagement: Active involvement of all participants throughout process

    • Ongoing Refinement: Continuous improvement based on real-world experience and feedback

    Development Environment Setup

    Guardian offers two deployment options for accessing the platform's methodology digitization capabilities.

    Deployment Options

    Managed Guardian Service (MGS) - Recommended for Getting Started:

    • Benefits: No infrastructure management, immediate access, automatic updates, professional support

    • Ideal For: Organizations beginning methodology digitization journey

    • Access: Get started via Quick Start MGS docs

    Self-Hosted Installation - For Advanced Users:

    • Benefits: Complete control, customization capabilities, infrastructure integration, data sovereignty

    • Requirements: Docker/Docker Compose, Node.js, Hedera credentials, sufficient server resources

    • Guide: Guardian installation instructions

    Essential Development Tools

    Core Requirements:

• Modern web browser (e.g., Chrome, Firefox) for the Guardian interface

    • API testing tools (like Postman) for integration development

    • Text editor with JSON support for policy/schema development

    • Git version control for collaboration

    Recommended Setup:

• VS Code with your favorite extensions

    • Docker Desktop for local development

    • Hedera testnet account for testing

• IPFS node (e.g., Filebase) for document storage testing

    Key Setup Resources

    Configuration Guides:

    • Prerequisites documentation - Detailed setup requirements

    • Environment parameters guide - Configuration instructions

    • API guidelines - Integration patterns and endpoints

    API Integration: Guardian's RESTful APIs enable integration with existing monitoring systems, data collection platforms, and verification tools for seamless workflow incorporation.


    Related Resources

    • Guardian Architecture - Technical platform overview

    • Guardian Installation Guide - Setup instructions

    • VM0033 Methodology - Source methodology document

    • Policy Workflow Engine - Core digitization capabilities

    Foundation Complete: You now understand methodology digitization concepts and Guardian's role in it. Chapter 2 will provide the VM0033 domain knowledge needed before we begin technical implementation.

    curl 'https://guardianservice.app/api/v1/accounts/access-token' \
      -H 'sec-ch-ua-platform: "macOS"' \
      -H 'Referer: https://guardianservice.app/login' \
      -H 'User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/139.0.0.0 Safari/537.36' \
      -H 'Accept: application/json, text/plain, */*' \
      -H 'sec-ch-ua: "Not;A=Brand";v="99", "Google Chrome";v="139", "Chromium";v="139"' \
      -H 'Content-Type: application/json' \
      -H 'sec-ch-ua-mobile: ?0' \
      --data-raw '{"refreshToken":"eyJhbGciOiJSUzI1NiIsInR5cCI6IkpXVCJ9.eyJpZCI6IjAzMDU2OWVkLThjZWQtNGVmNS05ZjBlLTgwNDAwNjJhMWZiOCIsIm5hbWUiOiJnYXV0YW0iLCJleHBpcmVBdCI6MTc4ODkzNTYwMzYxMywiaWF0IjoxNzU3Mzk5NjAzfQ.JiaVXown792eHo2qxA2_d7VTrLdIL9zIPZ0UI-gZBtGn6ddSIVWsgwO2VRjGEsOHiymQNe8G4o8EwR79StZcfvz762ra52St38Gy9f_MQwVWCLv42oxqPTT8xTep41nnJoZbk85NQSR2rC6zrih4gV6Ue1MIj80TpJfwWC0Lz_4"}'
    https://guardianservice.app/api/v1/accounts/loginByEmail
# VM0033 Policy ID from dry-run URL or policy JSON
POLICY_ID="689d5badaf8487e6c32c8a2a"

# PDD submission endpoint
# - pass the bearer token in the Authorization header
# - request body available in artifacts as ../../_shared/artifacts/PDD_MR_request_body.json
POST https://guardianservice.app/api/v1/policies/689d5badaf8487e6c32c8a2a/blocks/55df4f18-d3e5-4b93-af87-703a52c704d6

# Monitoring report submission endpoint
# - pass the bearer token in the Authorization header
# - request body available in artifacts as ../../_shared/artifacts/PDD_MR_request_body.json
POST https://guardianservice.app/api/v1/policies/689d5badaf8487e6c32c8a2a/blocks/53caa366-4c21-46ff-b16d-f95a850f7c7c
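# Put together as a concrete request (sketch; ACCESS_TOKEN comes from the
# access-token call above, the body from the linked artifact file)
curl -X POST "https://guardianservice.app/api/v1/policies/${POLICY_ID}/blocks/55df4f18-d3e5-4b93-af87-703a52c704d6" \
  -H "Authorization: Bearer <ACCESS_TOKEN>" \
  -H "Content-Type: application/json" \
  --data-binary @PDD_MR_request_body.json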
    
    // Create virtual users for automated testing
    async function createVirtualUsers(policyId, authToken) {
      const endpoint = `https://guardianservice.app/api/v1/policies/${policyId}/dry-run/user`;
    
      // Create Project Developer virtual user
      const projectDeveloper = await fetch(endpoint, {
        method: 'POST',
        headers: {
          'Content-Type': 'application/json',
          'Authorization': `Bearer ${authToken}`
        },
        body: JSON.stringify({
          role: 'Project_Proponent'
        })
      });
    
      // Create VVB virtual user
      const vvbUser = await fetch(endpoint, {
        method: 'POST',
        headers: {
          'Content-Type': 'application/json',
          'Authorization': `Bearer ${authToken}`
        },
        body: JSON.stringify({
          role: 'VVB'
        })
      });
    
      return {
        projectDeveloper: await projectDeveloper.json(),
        vvb: await vvbUser.json()
      };
    }
    
    // Login virtual users and get their tokens
    async function loginVirtualUser(policyId, virtualUser, authToken) {
      const endpoint = `https://guardianservice.app/api/v1/policies/${policyId}/dry-run/login`;
    
      const response = await fetch(endpoint, {
        method: 'POST',
        headers: {
          'Content-Type': 'application/json',
          'Authorization': `Bearer ${authToken}`
        },
        body: JSON.stringify({
          did: virtualUser.did
        })
      });
    
      return response.json();
    }
// Automated VM0033 workflow execution - sample code, untested sketch
    class VM0033WorkflowAutomation {
      constructor(policyId, ownerToken) {
        this.policyId = policyId;
        this.ownerToken = ownerToken;
        this.virtualUsers = {};
      }
    
      // Initialize dry-run environment
      async initializeDryRun() {
        // Set policy to dry-run mode
        await fetch(`https://guardianservice.app/api/v1/policies/${this.policyId}/dry-run`, {
          method: 'PUT',
          headers: { 'Authorization': `Bearer ${this.ownerToken}` }
        });
    
        // Create virtual users
        this.virtualUsers = await createVirtualUsers(this.policyId, this.ownerToken);
    
        // Login virtual users
        this.virtualUsers.projectDeveloperToken = await loginVirtualUser(
          this.policyId,
          this.virtualUsers.projectDeveloper,
          this.ownerToken
        );
    
        this.virtualUsers.vvbToken = await loginVirtualUser(
          this.policyId,
          this.virtualUsers.vvb,
          this.ownerToken
        );
      }
    
  // Execute complete project lifecycle
  // (submitValidationReport, submitMonitoringReports and submitVerificationReport
  // are assumed helpers following the same block-submission pattern as registerVVB;
  // they are omitted here for brevity)
      async executeCompleteWorkflow() {
        try {
          // Step 1: Project Developer submits PDD
          const pddResult = await this.submitPDD();
          console.log('PDD submitted:', pddResult.id);
    
          // Step 2: VVB registers for validation
          const vvbResult = await this.registerVVB();
          console.log('VVB registered:', vvbResult.id);
    
          // Step 3: VVB validates project and submits validation report
          const validationResult = await this.submitValidationReport(pddResult.id);
          console.log('Validation completed:', validationResult.id);
    
          // Step 4: Project Developer submits monitoring reports
          const monitoringResults = await this.submitMonitoringReports(pddResult.id);
          console.log('Monitoring reports submitted:', monitoringResults.length);
    
          // Step 5: VVB verifies monitoring and submits verification report
          const verificationResult = await this.submitVerificationReport(monitoringResults[0].id);
          console.log('Verification completed:', verificationResult.id);
    
          // Step 6: Get final artifacts and token information
          const artifacts = await this.getArtifacts();
          console.log('Workflow completed with artifacts:', artifacts.length);
    
          return {
            pdd: pddResult,
            validation: validationResult,
            monitoring: monitoringResults,
            verification: verificationResult,
            artifacts: artifacts
          };
    
        } catch (error) {
          console.error('Workflow execution failed:', error);
          throw error;
        }
      }
    
  async submitPDD() {
    // Assumes a submitPDD(policyId, data, token) helper and a vm0033PddData
    // fixture (e.g., the PDD_MR_request_body.json artifact) are defined elsewhere
    return submitPDD(this.policyId, vm0033PddData, this.virtualUsers.projectDeveloperToken.accessToken);
  }
    
      async registerVVB() {
        const blockId = 'aeab02d2-d7fc-4d7a-93a5-947855da95c7'; // VVB registration block
        const endpoint = `https://guardianservice.app/api/v1/policies/${this.policyId}/blocks/${blockId}`;
    
        const vvbData = {
          document: {
            vvb_details: {
              organization_name: "Automated Testing VVB",
              accreditation_scope: "Wetland restoration methodologies",
              lead_auditor: "API Test Lead"
            },
            capabilities: {
              vm0033_experience: true,
              wetland_expertise: true,
              site_visit_capability: true
            }
          }
        };
    
        const response = await fetch(endpoint, {
          method: 'POST',
          headers: {
            'Content-Type': 'application/json',
            'Authorization': `Bearer ${this.virtualUsers.vvbToken.accessToken}`
          },
          body: JSON.stringify(vvbData)
        });
    
        return response.json();
      }
    
      async getArtifacts() {
        const endpoint = `https://guardianservice.app/api/v1/policies/${this.policyId}/dry-run/artifacts`;
        const response = await fetch(endpoint, {
          headers: { 'Authorization': `Bearer ${this.ownerToken}` }
        });
        return response.json();
      }
    }
    
// Execute automated workflow (ownerToken is the Standard Registry access token
// obtained via the authentication flow above)
const workflow = new VM0033WorkflowAutomation('689d5badaf8487e6c32c8a2a', ownerToken);
    workflow.initializeDryRun()
      .then(() => workflow.executeCompleteWorkflow())
      .then(results => console.log('Complete workflow executed:', results))
      .catch(error => console.error('Workflow failed:', error));
// cypress/integration/vm0033-methodology.spec.js
describe('VM0033 Methodology End-to-End Testing', () => {
  const policyId = '689d5badaf8487e6c32c8a2a';
  const authTokens = {};

  beforeEach(() => {
    // cy.login is a custom command (defined in cypress/support) that
    // authenticates and yields an access token
    cy.login('standard_registry', 'password').then((token) => {
      authTokens.owner = token;
    });
  });

  it('should execute complete VM0033 workflow via API', () => {
    // Initialize dry-run mode (cy.request takes an options object when
    // custom headers are needed)
    cy.request({
      method: 'PUT',
      url: `/api/v1/policies/${policyId}/dry-run`,
      headers: { 'Authorization': `Bearer ${authTokens.owner}` }
    }).then((response) => {
      expect(response.status).to.eq(200);
    });

    // Create a virtual user
    cy.request({
      method: 'POST',
      url: `/api/v1/policies/${policyId}/dry-run/user`,
      body: { role: 'Project_Proponent' },
      headers: { 'Authorization': `Bearer ${authTokens.owner}` }
    }).then((response) => {
      const virtualUser = response.body;

      // Submit PDD using the virtual user
      const pddData = {
        document: {
          project_details: {
            G5: 'Cypress Test Project',
            project_description: 'Automated test wetland restoration'
          }
        }
      };

      cy.request({
        method: 'POST',
        url: `/api/v1/policies/${policyId}/blocks/aaa78a11-c00b-4669-9022-bd2971504d70`,
        body: pddData,
        headers: { 'Authorization': `Bearer ${virtualUser.accessToken}` }
      }).then((pddResponse) => {
        expect(pddResponse.status).to.eq(200);
        expect(pddResponse.body).to.have.property('id');
      });
    });
  });

  it('should validate calculation accuracy against test artifacts', () => {
    // Load VM0033 test case data
    cy.fixture('vm0033-test-case.json').then((testData) => {
      // Submit test data and verify calculations
      cy.request({
        method: 'POST',
        url: `/api/v1/policies/${policyId}/blocks/aaa78a11-c00b-4669-9022-bd2971504d70`,
        body: { document: testData.input },
        headers: { 'Authorization': `Bearer ${authTokens.owner}` }
      }).then((response) => {
        // Verify calculation results match expected values
        const calculatedValues = response.body.calculatedValues;
        expect(calculatedValues.baseline_emissions).to.be.closeTo(testData.expected.baseline_emissions, 0.01);
        expect(calculatedValues.project_emissions).to.be.closeTo(testData.expected.project_emissions, 0.01);
        expect(calculatedValues.net_emission_reductions).to.be.closeTo(testData.expected.net_emission_reductions, 0.01);
      });
    });
  });

  it('should handle concurrent user operations', () => {
    // Cypress commands are queued, not real Promises, so chain the
    // submissions instead of wrapping them in Promise.all
    for (let i = 0; i < 5; i++) {
      cy.request({
        method: 'POST',
        url: `/api/v1/policies/${policyId}/dry-run/user`,
        body: { role: 'Project_Proponent' },
        headers: { 'Authorization': `Bearer ${authTokens.owner}` }
      }).then((userResponse) => {
        cy.request({
          method: 'POST',
          url: `/api/v1/policies/${policyId}/blocks/aaa78a11-c00b-4669-9022-bd2971504d70`,
          body: { document: { project_details: { G5: `Concurrent Project ${i}` } } },
          headers: { 'Authorization': `Bearer ${userResponse.body.accessToken}` }
        }).then((response) => {
          // Verify each submission succeeds
          expect(response.status).to.eq(200);
          expect(response.body).to.have.property('id');
        });
      });
    }
  });
});
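To run the spec locally, the standard Cypress CLI applies:

npx cypress run --spec cypress/integration/vm0033-methodology.spec.js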

    Chapter 20: Guardian Tools Architecture and Implementation

    Building standardized calculation tools using Guardian's extractDataBlock and customLogicBlock mini-policy pattern

    This chapter details how to build Guardian Tools - think of them as mini policies that implement standardized calculation methodologies like CDM AR Tools. Using AR Tool 14 as our example, you'll learn the complete architecture for creating reusable calculation tools that can be integrated into any environmental methodology.

    Learning Objectives

    After completing this chapter, you will be able to:

    • Understand Guardian's Tools architecture as re-usable mini policies with data extraction and calculation blocks

    • Analyze AR Tool 14's production implementation in Guardian format

    • Build extractDataBlock workflows for schema input/output operations

    • Implement standardized calculation logic using customLogicBlock

    • Create modular, reusable tools for integration across multiple methodologies

    • Test and validate tool calculations against methodology test artifacts

    Prerequisites

    • Completed Chapter 18: Custom Logic Block Development

    • Understanding of Guardian workflow blocks from Part IV

    • Access to AR Tool 14 artifacts: AR-Tool-14.json (Guardian implementation) and ar-am-tool-14-v4.1.pdf (methodology PDF)

    • Familiarity with extractDataBlock documentation

    What is AR Tool 14?

    AR Tool 14 is a CDM (Clean Development Mechanism) methodological tool for "Estimation of carbon stocks and change in carbon stocks of trees and shrubs in A/R CDM project activities." It provides standardized methods for:

    Primary Purpose

    • Tree biomass estimation using allometric equations, sampling plots, or proportionate crown cover

    • Shrub biomass estimation based on crown cover measurements

    • Carbon stock changes calculated between two points in time or as annual changes

    • Uncertainty management with discount factors for conservative estimates

    Key Calculation Methods

From the AR Tool 14 methodology document (ar-am-tool-14-v4.1.pdf), the tool supports multiple approaches:

    1. Measurement of sample plots - Stratified random sampling and double sampling

    2. Modelling approaches - Tree growth and stand development models

    3. Proportionate crown cover - For sparse vegetation scenarios

    4. Direct change estimation - Re-measurement of permanent plots

    Guardian Tools Architecture

    Mini-Policy Pattern

    Guardian Tools usually follow a three-block pattern:
{
  "blockType": "tool",
  "tag": "Tool",
  "children": [
    {
      "blockType": "extractDataBlock",
      "action": "get",
      "tag": "get_ar_tool_14"
    },
    {
      "blockType": "customLogicBlock",
      "tag": "calc_ar_tool_14"
    },
    {
      "blockType": "extractDataBlock",
      "action": "set",
      "tag": "set_ar_tool_14"
    }
  ]
}

    Block Flow Architecture

    The Tool workflow follows this pattern:

    1. Input Event → get_ar_tool_14 (extractDataBlock)

    2. Data Processing → calc_ar_tool_14 (customLogicBlock)

    3. Output Event → set_ar_tool_14 (extractDataBlock)

    extractDataBlock: Data Input/Output Engine

    Understanding extractDataBlock

    The extractDataBlock is Guardian's mechanism for working with embedded schema data. From the documentation:

    "This block is used for VC documents which are based on (or 'conform to') a schema which contains embedded schemas, extractDataBlock provides means to extract a data set which corresponds to any of these embedded schemas (at any depth level), and if required after processing to return the updated values back into the VC dataset to their original 'place'."

    AR Tool 14 Schema Integration

In our AR Tool 14 implementation, the extractDataBlock references schema #632fd070-d788-49ae-889b-cd281c6c7194&1.0.0, which is the published version of the Tool 14 schema. You can see the schema Excel within the PDD-schema.xlsx artifact:
{
  "blockType": "extractDataBlock",
  "action": "get",
  "schema": "#632fd070-d788-49ae-889b-cd281c6c7194&1.0.0",
  "tag": "get_ar_tool_14"
}

    This extracts the AR Tool 14 input data structure from the parent document, containing parameters like:

    • Tree measurements - DBH, height, species data

    • Plot information - Area, sampling design, stratum details

    • Calculation methods - Selected approaches for biomass estimation

    • Uncertainty parameters - Confidence levels and discount factors

    Data Extraction Process

    When a policy workflow calls the AR Tool 14, the extraction process works as follows:
// Conceptual flow - Guardian handles this automatically
const parentDocument = {
  document: {
    credentialSubject: [
      {
        // Parent methodology data
        project_details: {...},

        // Embedded AR Tool 14 schema data
        ar_tool_14_inputs: {
          scenario_type: "Project scenario",
          method_for_change_in_cs_in_trees: "Between two points of time",
          cs_in_trees_at_point_of_time: {
            method_used_for_estimating_cs_in_trees_at_a_point_of_time: "Measurement of sample plots",
            measurement_of_sample_plots: {
              sampling_design: "Stratified random sampling",
              stratified_random_sampling: {
                stratified_random_sampling_variables: [...]
              }
            }
          }
        }
      }
    ]
  }
};

// extractDataBlock extracts just the ar_tool_14_inputs portion

    customLogicBlock: AR Tool 14 Calculation Engine

    Production JavaScript Implementation

The AR Tool 14 customLogicBlock contains the actual calculation engine. From the AR-Tool-14.json artifact, here's the implementation structure:
// Core calculation function from AR Tool 14 production code
function calc_ar_tool_14(document) {
    let delta_C_SHRUB = 0;
    let C_SHRUB_t = 0;
    let delta_C_TREE = 0;
    let C_TREE = 0;

    const method_for_change_in_cs_in_trees = document.method_for_change_in_cs_in_trees;
    const scenario_type = document.scenario_type;

    // Tree carbon stock change calculations
    if (method_for_change_in_cs_in_trees === 'Between two points of time') {
        const change_in_cs_in_trees_btw_two_points_of_time =
            document.change_in_cs_in_trees_btw_two_points_of_time;

        const method_selection = change_in_cs_in_trees_btw_two_points_of_time
            .method_selection_cs_in_trees_bwt_two_points_of_time;

        if (method_selection === 'Difference of two independent stock estimations') {
            delta_C_TREE = calc_difference_of_two_independent_stock(
                change_in_cs_in_trees_btw_two_points_of_time.difference_of_two_independent_stock,
                scenario_type
            );
        } else if (method_selection === 'Direct estimation of change by re-measurement of sample plots') {
            delta_C_TREE = calc_direct_estimation_change_via_sample_plot(
                change_in_cs_in_trees_btw_two_points_of_time.direct_estimation_change_via_sample_plot,
                scenario_type
            );
        }
    }

    // Tree carbon stock at point in time calculations
    const cs_in_trees_at_point_of_time = document.cs_in_trees_at_point_of_time;
    const method_used = cs_in_trees_at_point_of_time.method_used_for_estimating_cs_in_trees_at_a_point_of_time;

    if (method_used === "Measurement of sample plots") {
        const measurement_of_sample_plots = cs_in_trees_at_point_of_time.measurement_of_sample_plots;

        if (measurement_of_sample_plots.sampling_design === "Stratified random sampling") {
            C_TREE = calc_stratified_random_sampling(
                measurement_of_sample_plots.stratified_random_sampling,
                scenario_type
            );
        } else {
            C_TREE = calc_double_sampling(
                measurement_of_sample_plots.double_sampling,
                scenario_type
            );
        }
    }

    return Object.assign(document, {
        delta_C_SHRUB: delta_C_SHRUB,
        C_SHRUB_t: C_SHRUB_t,
        delta_C_TREE: delta_C_TREE,
        C_TREE: C_TREE
    });
}

    Stratified Random Sampling Implementation

    Code for stratified random sampling from AR Tool 14:
// Real implementation from AR Tool 14 production code
function calc_stratified_random_sampling(document, scenario) {
    let discount = 0;
    const stratified_random_sampling_variables = document.stratified_random_sampling_variables;

    // Calculate mean biomass per stratum
    stratified_random_sampling_variables.forEach((variable) => {
        const sum = variable.b_TREE_p_i.reduce(
            (accumulator, currentValue) => accumulator + currentValue, 0
        );
        const total_sample_plot = variable.b_TREE_p_i.length;
        variable.b_TREE_i = sum / total_sample_plot;

        // Calculate variance
        const sumOfSquares = variable.b_TREE_p_i.reduce(
            (accumulator, currentValue) => accumulator + Math.pow(currentValue, 2), 0
        );
        const numerator = total_sample_plot * sumOfSquares - Math.pow(sum, 2);
        const denominator = total_sample_plot * (total_sample_plot - 1);
        variable.S_2_i = numerator / denominator;
    });

    // Weighted mean calculation
    const w_i_array = stratified_random_sampling_variables.map(variable => variable.w_i);
    const b_TREE_i_array = stratified_random_sampling_variables.map(variable => variable.b_TREE_i);
    document.b_TREE = sumProduct(w_i_array, b_TREE_i_array);

    // Uncertainty calculation
    const S_2_i_array = stratified_random_sampling_variables.map(variable => variable.S_2_i / variable.b_TREE_p_i.length);
    const w_2_i_array = w_i_array.map(variable => Math.pow(variable, 2));
    const summationForUncertainty = sumProduct(w_2_i_array, S_2_i_array);
    const sqrtSummation = Math.sqrt(summationForUncertainty);

    document.u_c = safeDivide(document.t_VAL * sqrtSummation, document.b_TREE);
    document.B_TREE = document.A * document.b_TREE;

    // Convert to carbon
    let C_tree = (44 / 12) * document.CF_TREE * document.B_TREE;
    const relative_uncertainty = safeDivide(document.u_c, C_tree) * 100;

    // Apply uncertainty discount
    const applied_discount = getDiscount(relative_uncertainty);
    if (applied_discount !== null) {
        discount = (applied_discount * document.u_c) / 100;
    }

    return scenario === 'Project scenario' ? C_tree - discount : C_tree + discount;
}

    Uncertainty Management System

    AR Tool 14 also implements a sophisticated uncertainty discount system:
// Uncertainty discount system from production code
function getDiscount(uncertainty) {
    if (uncertainty <= 10) {
        return 0; // 0% discount
    } else if (uncertainty > 10 && uncertainty <= 15) {
        return 25; // 25% discount
    } else if (uncertainty > 15 && uncertainty <= 20) {
        return 50; // 50% discount
    } else if (uncertainty > 20 && uncertainty <= 30) {
        return 75; // 75% discount
    } else if (uncertainty > 30) {
        return 100; // 100% discount - conservative estimate
    } else {
        return null; // Invalid uncertainty value
    }
}

    Building Your Own Tool

    Step 1: Define Tool Schema

    First, create a schema that captures all the input parameters for your calculation methodology:
{
  "type": "object",
  "properties": {
    "scenario_type": {
      "type": "string",
      "enum": ["Baseline scenario", "Project scenario"]
    },
    "method_for_change_in_cs_in_trees": {
      "type": "string",
      "enum": ["Between two points of time", "In a year"]
    },
    "cs_in_trees_at_point_of_time": {
      "type": "object",
      "properties": {
        "method_used_for_estimating_cs_in_trees_at_a_point_of_time": {
          "type": "string",
          "enum": ["Measurement of sample plots", "Proportionate crown cover"]
        }
      }
    }
  }
}

    Step 2: Implement Tool Policy Structure

    Create the three-block tool structure:
{
  "name": "Tool Name",
  "description": "Tool description from methodology PDF",
  "config": {
    "blockType": "tool",
    "tag": "Tool",
    "children": [
      {
        "id": "extract-input",
        "blockType": "extractDataBlock",
        "action": "get",
        "schema": "#your-schema-id&1.0.0",
        "tag": "get_your_tool"
      },
      {
        "id": "calculate",
        "blockType": "customLogicBlock",
        "tag": "calc_your_tool",
        "expression": "function calc() { /* Your calculation code */ }"
      },
      {
        "id": "extract-output",
        "blockType": "extractDataBlock",
        "action": "set",
        "schema": "#your-schema-id&1.0.0",
        "tag": "set_your_tool"
      }
    ],
    "events": [
      {
        "target": "get_your_tool",
        "source": "Tool",
        "input": "RunEvent",
        "output": "input_event"
      }
    ]
  }
}

    Step 3: Implement Calculation Logic

    Build your customLogicBlock calculation function following the Guardian pattern:
function calc_your_tool() {
    const documents = arguments[0] || [];

    return documents.map((document) => {
        const inputData = document.document.credentialSubject[0];

        // Your methodology calculations here
        const results = performCalculations(inputData);

        // Return modified document with results
        return Object.assign(inputData, results);
    });
}

function performCalculations(data) {
    // Implement your specific methodology equations
    // Follow patterns from AR Tool 14 for structure

    return {
        calculated_parameter_1: result1,
        calculated_parameter_2: result2,
        uncertainty_assessment: uncertaintyResults
    };
}

    Tool Integration in Parent Policies

    Calling Tools from Methodologies

    Guardian Tools are designed to be called from parent methodology policies. Here's how VM0033 would integrate AR Tool 14:

    Tool Event Configuration

    Tools communicate with parent policies through Guardian's event system:
{
  "events": [
    {
      "target": "get_ar_tool_14",
      "source": "Tool",
      "input": "RunEvent",
      "output": "input_ar_tool_14"
    },
    {
      "target": "Tool",
      "source": "set_ar_tool_14",
      "input": "output_ar_tool_14",
      "output": "RunEvent"
    }
  ]
}

    Testing and Validation Framework

    Unit Testing Tool Calculations

    Test individual calculation functions against methodology test cases:
// Test framework for AR Tool 14 calculations
function testARTool14StratifiedSampling() {
    const testInput = {
        scenario_type: "Project scenario",
        stratified_random_sampling_variables: [
            {
                w_i: 0.3,
                b_TREE_p_i: [45.2, 52.1, 38.7, 41.3],
                stratum_area: 100
            },
            {
                w_i: 0.7,
                b_TREE_p_i: [62.4, 58.9, 67.2, 55.8],
                stratum_area: 200
            }
        ],
        CF_TREE: 0.47,
        A: 300,
        t_VAL: 1.96
    };

    const result = calc_stratified_random_sampling(testInput, "Project scenario");
    const expectedResult = 156.7; // From test artifact
    const tolerance = 0.05; // 5% tolerance

    const difference = Math.abs(result - expectedResult) / expectedResult;

    return {
        passed: difference <= tolerance,
        calculated: result,
        expected: expectedResult,
        difference_percent: difference * 100
    };
}

    Best Practices for Guardian Tools

    Design Principles

    1. Single Responsibility: Each tool should implement exactly one methodology or calculation standard

    2. Schema Clarity: Design clear, well-documented input/output schemas

    3. Modular Architecture: Break complex calculations into testable functions

4. Error Resilience: Handle edge cases and invalid inputs gracefully

5. Performance: Optimize for large dataset processing

6. Validation: Include comprehensive uncertainty and validation logic

    Chapter Summary

    Guardian Tools provide a powerful architecture for implementing standardized calculation methodologies as reusable mini policies. Key concepts:

    • Tools are like mini policies that follow the extractDataBlock → customLogicBlock → extractDataBlock pattern

    • AR Tool 14 demonstrates complete implementation of complex biomass calculations with uncertainty management

    • extractDataBlock handles schema-based data input and output operations automatically

    • customLogicBlock contains the actual methodology calculation logic in JavaScript

    • Production examples from AR Tool 14 show real implementation patterns for stratified sampling, uncertainty discounts, and error handling

    • Integration patterns enable tools to be called from parent methodology policies

    • Testing frameworks ensure calculation accuracy against methodology test artifacts

    Next Steps

Chapter 21 demonstrates comprehensive testing and validation frameworks for custom logic blocks, covering both individual tools and complete policies.

    References and Further Reading

    • AR-Tool-14.json - Complete tool policy configuration

    • ar-am-tool-14-v4.1.pdf - Original CDM methodology document

    • Guardian extractDataBlock Documentation

    • Guardian customLogicBlock Documentation


    Tool Building Success: You now understand how to build complete Guardian Tools using the extractDataBlock and customLogicBlock pattern. The AR Tool 14 example provides a production-ready template for implementing any standardized calculation methodology in Guardian.

    Chapter 16: Advanced Policy Patterns

    Exploring advanced Guardian policy features for production methodologies including external data integration, document validation, API transformation, and policy testing

    Building on VM0033's implementation patterns from Chapter 15, Chapter 16 explores advanced features that enable production-scale policy deployment. These patterns handle external data integration, document validation, API transformations, and testing workflows essential for real-world carbon credit programs.

    1. Data Transformation Blocks for API Integration

    Verra Project Hub API Integration

    VM0033 implements a dataTransformationAddon that converts Guardian project submissions into Verra's Project Hub compatible API payloads, enabling automatic project registration with external registries.

    VM0033 Project Description Transformation Block

    The transformation block in VM0033 (tag: project-description) demonstrates how Guardian can transform internal project data into external API formats:

    Key Transformation Features:

    1. API Compatibility: Creates Verra Project Hub API-compatible JSON structure

    2. Data Mapping: Maps Guardian schema fields to external registry requirements

    3. Standard Integration: Handles VCS and CCB standard-specific fields

4. Default Values: Sets appropriate defaults for registry submission status

5. Bulk Processing: Processes multiple documents in a single transformation

    Implementation Pattern:

    Implementation Use Cases

    Carbon Registry Integration:

    • Automatic project listing with Verra, Gold Standard, or other registries

    • Real-time status synchronization between Guardian and external systems

    • Standardized data exchange for multi-registry projects

    Corporate Reporting:

    • Transform carbon project data for corporate sustainability reporting

    • Generate API payloads for ESG reporting platforms

    • Create standardized data formats for carbon accounting systems


    2. Document Validation Blocks

    Guardian's documentValidatorBlock ensures document integrity and compliance throughout policy workflows. This block validates document structure, content, and relationships before processing continues.

    Document Validation Architecture

    Validation Types:

    1. Schema Validation: Ensures documents conform to defined JSON schemas

    2. Ownership Validation: Verifies document ownership and assignment rules

    3. Content Validation: Checks specific field values and business logic

    4. Relationship Validation: Validates links between related documents

    Condition Types:

| Type | Description | Example Use Case |
| --- | --- | --- |
| Equal | Field equals a specific value | document.type = "project" |
| Not Equal | Field does not equal a value | status ≠ "Rejected" |
| In | Field value is in an array | methodology ∈ [VM0033, VM0007] |
| Not In | Field value is not in an array | country ∉ [sanctioned_countries] |

    Practical Validation Examples

    Project Eligibility Validation:

    VVB Assignment Validation:
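As an illustration, a project eligibility check could be configured along these lines (a hypothetical sketch, not taken from VM0033 - the schema reference, field paths, and values are assumptions):

{
  "blockType": "documentValidatorBlock",
  "tag": "validate_project_eligibility",
  "documentType": "vc-document",
  "checkSchema": "#pdd-schema-uuid&1.0.0",
  "conditions": [
    {
      "type": "Equal",
      "field": "document.credentialSubject.0.project_type",
      "value": "Tidal wetland restoration"
    },
    {
      "type": "Not In",
      "field": "document.credentialSubject.0.host_country",
      "value": ["sanctioned_country_A", "sanctioned_country_B"]
    }
  ]
}

A VVB assignment check would follow the same pattern, e.g. an Equal condition verifying that the document's assigned VVB matches the current user's DID.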

    Note: While VM0033 doesn't use documentValidatorBlock in its current implementation, it relies on other validation mechanisms including documentsSourceAddon filters and customLogicBlock validations to ensure document integrity.


    3. External Data Integration

    Guardian's externalDataBlock enables policies to integrate with external APIs and data providers for real-time environmental monitoring and verification.

    External Data Block Architecture

    Example 1: Kanop Environmental Data Integration

Kanop provides satellite-based MRV technology for nature-based carbon projects. Integration enables automatic data retrieval for biomass monitoring, forest cover analysis, and carbon stock assessments. An external data block can be used to integrate with Kanop and retrieve this data.
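Conceptually, the provider (or a middleware service) pushes measurements into the policy's externalDataBlock through Guardian's external data endpoint, as sketched below (the policyTag and the document fields are illustrative assumptions; the tag must match the target policy's externalDataBlock configuration):

// Sketch: push satellite-derived metrics into a policy's externalDataBlock
await fetch('https://guardianservice.app/api/v1/external', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({
    policyTag: 'Tag_VM0033',        // tag of the target policy (assumed)
    document: {                     // illustrative measurement payload
      projectId: 'project-001',
      biomass_t_per_ha: 142.7,
      forest_cover_pct: 63.2,
      observation_date: '2024-06-30'
    }
  })
});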

    Example 2: IoT Device Integration for Cookstove Projects

    For metered cookstove projects, external data blocks can integrate with IoT devices to collect real-time usage data:

    IoT Data Processing:

    Real-Time Data Validation

    External data integration includes validation mechanisms to ensure data quality:
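A minimal shape for such checks (illustrative only - the thresholds and field names are assumptions, not Guardian built-ins):

// Sketch: basic quality gates for incoming IoT/satellite readings
function validateExternalReading(reading) {
  const errors = [];
  if (typeof reading.value !== 'number' || Number.isNaN(reading.value)) {
    errors.push('value must be numeric');
  }
  if (reading.value < 0) {
    errors.push('negative readings are not plausible for usage data');
  }
  if (!reading.timestamp || Number.isNaN(Date.parse(reading.timestamp))) {
    errors.push('timestamp missing or unparseable');
  }
  return { valid: errors.length === 0, errors };
}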


    4. Policy Testing Framework

    Guardian provides robust testing capabilities for policy validation before production deployment, including manual dry-run testing and programmatic test automation.

    Dry-Run Mode Testing

As the name suggests, dry-run mode enables complete policy testing: a policy developer can take on different roles and simulate the entire process end to end to verify everything works.

    Starting Dry-Run Mode:

You can trigger dry-run either via the policy editor UI (click Dry Run at the top of the menu bar) or via the API.
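The API equivalent is a single call, mirroring the dry-run requests used in the Chapter 23 automation examples:

curl -X PUT "https://guardianservice.app/api/v1/policies/<POLICY_ID>/dry-run" \
  -H "Authorization: Bearer <ACCESS_TOKEN>"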

    Dry-Run Features:

    1. Virtual Users: Create test users without real Hedera accounts

    2. Mock Transactions: Simulate blockchain transactions locally

    3. Local Storage: Store all documents and artifacts in database

    4. Full Workflow: Test complete certification workflows

    Dry-Run Workflow Operations

    Key Operations Available in Dry-Run Mode:

    1. Restart: Reset policy state and remove all previous dry-run records

    2. View Transactions: Examine mock blockchain transactions

    3. View Artifacts: Review all generated documents

4. View IPFS Files: Check files that would be stored in IPFS

5. Savepoints: Save and restore workflow states - create and restore checkpoints for testing different scenarios

    Programmatic Policy Testing

    Guardian supports automated policy testing with predefined test scenarios and expected outcomes.

    Adding Test Cases:

    Tests are embedded in policy files and executed programmatically:

    Running Automated Tests:

    Test Result Analysis

    Test Failure Analysis

    When tests fail, Guardian provides detailed comparison and debugging information:

    Testing Best Practices:

    1. Test Coverage Strategy: Test each stakeholder workflow independently, validate all document state transitions, test error handling and edge cases

    2. Test Data Management: Create realistic test datasets matching production scenarios, use boundary value testing for numerical inputs

    3. Continuous Testing: Run tests after each policy modification, automate testing in CI/CD pipelines


    5. Demo Mode for Simplified Testing

    Guardian provides Demo Mode as a simplified approach to policy testing, particularly useful for novice users and quick policy validation. Demo mode is selected during policy import.

    Demo Mode Features

Demo Mode operates similarly to dry-run but with a simplified user interface:

    • Read-Only Policy Processing: All policy processing is read-only; policy editing is not possible

    • No External Communication: No communication with external systems such as Hedera network or IPFS

    • Simplified UI: Streamlined interface designed for ease of use

    • Local Storage: All artifacts stored locally similar to dry-run mode

    Summary

    Chapter 16 demonstrated Guardian's advanced policy patterns essential for production deployment:

    1. Data Transformation: VM0033's project-description transformation block converts Guardian project data to Verra API-compatible formats for automatic registry integration

    2. Document Validation: documentValidatorBlock provides robust validation with condition-based rules for ensuring document integrity and business logic compliance

3. External Data Integration: externalDataBlock enables integration with providers like Kanop for satellite monitoring and IoT devices for real-time environmental data

4. Policy Testing: Dry-run mode and automated testing frameworks validate complete workflows before production deployment

5. Demo Mode: Simplified testing environment for quick policy validation and novice user training

    These patterns enable Guardian policies to integrate with real-world carbon markets, environmental monitoring systems, and corporate reporting platforms while maintaining data integrity and audit trails.

    Next Steps: Part V covers the calculation logic implementation, diving deep into methodology-specific emission reduction calculations and the JavaScript calculation engine that powers Guardian's environmental accounting.

Prerequisites Check: Ensure you have the setup from the earlier chapters in place before starting the exercises.

    Time Investment: ~25 minutes reading + ~90 minutes hands-on testing with dry-run mode

    Practical Exercises:

    1. Dry-Run Testing: Import and set up VM0033 in dry-run mode and test complete project lifecycle

    2. External Data Integration: Configure external data block for your methodology's monitoring requirements

    3. Document Validation: Implement validation rules for your specific business logic

    4. API Transformation: Create transformation block for your target registry's API format

Chapter Outlines

Part I: Foundation and Preparation

Chapter 1: Introduction to Methodology Digitization

    Purpose: Establish the foundation for understanding methodology digitization on Guardian platform.

    Key Topics:
    • What is methodology digitization and why it matters

    • Guardian platform's role in environmental asset tokenization

    • Overview of the digitization process from PDF to working policy

    • VM0033 as our reference case study - why it was chosen

    • Benefits of digitization: transparency, efficiency, automation

    • Common challenges and how this handbook addresses them

    • Setting up your development environment

    VM0033 Context: Introduction to VM0033's significance in blue carbon markets and its complexity as a comprehensive tidal wetland restoration methodology.

    Chapter 2: Understanding VM0033 Methodology

    Purpose: Provide deep domain knowledge of VM0033 before beginning technical implementation.

    Key Topics:

    • VM0033 scope and applicability conditions

    • Baseline scenario determination for tidal wetlands

    • Project activities and intervention types

    • Key stakeholders and their roles in wetland restoration

    • Emission sources and carbon pools covered

    • Monitoring requirements and verification processes

    • Relationship to other VCS methodologies and CDM tools

    VM0033 Context: Complete walkthrough of the methodology document structure, highlighting sections that will be digitized and their interdependencies.

    Chapter 3: Guardian Platform Overview for Methodology Developers

    Purpose: Provide methodology developers with Guardian-specific knowledge needed for digitization.

    Key Topics:

    • Guardian architecture: services, APIs, and data flow

    • Policy Workflow Engine (PWE) fundamentals

    • Schema system and Verifiable Credentials

    • Hedera Hashgraph integration and immutable records

    • User roles and permissions model

    • IPFS integration for document storage

    • Guardian UI components and user experience

    VM0033 Context: How VM0033's complexity maps to Guardian's capabilities and architectural patterns.

    Part II: Analysis and Planning

    Chapter 4: Methodology Analysis and Decomposition

    Purpose: Teach systematic approach to analyzing methodology documents for digitization.

    Key Topics:

    • Structured reading techniques for methodology PDFs

    • Identifying workflow stages and decision points

    • Mapping stakeholder interactions and document flows

    • Extracting data requirements and validation rules

    • Understanding temporal boundaries and crediting periods

    • Identifying calculation dependencies and parameter relationships

    VM0033 Context: Step-by-step analysis of VM0033 document, breaking down its content into digestible components and identifying digitization priorities.

    Chapter 5: Equation Mapping and Parameter Identification

    Purpose: Master the process of extracting and organizing all mathematical components of a methodology.

    Key Topics:

    • Recursive equation analysis starting from final emission reduction formula

    • Parameter classification: monitored vs. non-monitored vs. user-input

    • Building parameter dependency trees

    • Identifying default values and lookup tables

    • Handling conditional calculations and alternative methods

    • Creating calculation flowcharts and documentation

    VM0033 Context: Complete mapping of VM0033's emission reduction equations, including baseline emissions, project emissions, and leakage calculations with all parameter dependencies.

    Chapter 6: Tools and Modules Integration

    Purpose: Handle external tools and modules that methodologies reference.

    Key Topics:

    • Understanding CDM tools and VCS modules

    • Integrating AR-Tool14 for biomass calculations

    • Incorporating VMD modules for specific calculations

    • Handling tool versioning and updates

    • Creating unified calculation frameworks

    • Managing tool dependencies and conflicts

VM0033 Context: Integration of a subset of the tools referenced in VM0033, limited to AR-Tool05, AR-Tool14, and the AFOLU Non-Permanence Risk Tool.

    Chapter 7: Test Artifact Development

    Purpose: Create comprehensive test cases that validate the digitized methodology.

    Key Topics:

    • Designing test scenarios covering all methodology pathways

    • Creating input parameter datasets for testing

    • Establishing expected output benchmarks

    • Building validation spreadsheets with all calculations

    • Documenting test cases and acceptance criteria

    • Version control for test artifacts

    VM0033 Context: Development of complete VM0033 test spreadsheet with multiple project scenarios, covering different wetland types, restoration activities, and calculation methods.

    Part III: Schema Design and Development

    Chapter 8: Schema Architecture and Foundations

    Purpose: Understand Guardian's schema system fundamentals and architectural patterns.

    Key Topics:

    • Guardian's JSON Schema integration with Verifiable Credentials

    • Two-part schema architecture (Project Description + Calculations)

    • Field type selection and parameter mapping principles

    • Schema template structure and organization

    • Basic conditional logic and field visibility

    • Performance considerations for schema design

    VM0033 Context: VM0033's two-part architecture demonstrating how complex wetland restoration methodology translates into Guardian schema structure with 400+ components.

    Chapter 9: Project Design Document (PDD) Schema Development

    Purpose: Build comprehensive PDD schemas using Excel-first approach with step-by-step implementation.

    Key Topics:

    • Excel schema template usage and structure

    • Step-by-step PDD schema construction process

    • Conditional logic implementation with enum selections

    • Sub-schema creation and organization

    • Field key management for calculation code readability

    • Guardian import process and testing

    VM0033 Context: Complete walkthrough of building VM0033 PDD schema from Excel template, including certification pathway conditionals and calculation parameter capture.

    Chapter 10: Monitoring Report Schema Development

    Purpose: Create time-series monitoring schemas that handle annual data collection and calculation updates.

    Key Topics:

    • Temporal data structures for monitoring periods

    • Annual parameter tracking and time-series organization

    • Quality control fields and evidence documentation

    • Field key management for time-series calculations

    • VVB verification workflow support

    • Integration with PDD schema parameters

    VM0033 Context: VM0033 monitoring schema development covering herbaceous vegetation monitoring, carbon stock tracking, and temporal boundary management over 100-year crediting periods.

    Chapter 11: Advanced Schema Techniques

    Purpose: Master API schema management, field properties, and advanced Guardian features.

    Key Topics:

    • API-based schema operations and updates

    • Field key naming best practices for calculation code

    • Standardized Property Definitions from GBBC specifications

    • Four Required field types: None, Hidden, Required, Auto Calculate

    • Schema UUID management for efficient development

    • Bulk operations and version control strategies

    VM0033 Context: Advanced schema management techniques used in VM0033 development, including Auto Calculate field implementation for equation results and UUID management for policy integration.

    Chapter 12: Schema Testing and Validation Checklist

    Purpose: Validate schemas using Guardian's testing features before deployment.

    Key Topics:

    • Default Values, Suggested Values, and Test Values configuration

    • Schema preview testing and functionality validation

    • UUID integration into policy workflow blocks

    • Test artifact completeness checking

    • Field validation rules and user experience optimization

    • Pre-deployment checklist and user testing

    VM0033 Context: Practical testing approach used for VM0033 schema validation, including systematic testing of conditional logic and calculation field behavior.

    Part IV: Policy Workflow Design

    Chapter 13: Policy Workflow Architecture and Design Principles

    Purpose: Establish foundational understanding of Guardian policy architecture and design patterns for environmental methodology implementation.

    Key Topics:

    • Guardian policy architecture fundamentals and component overview

    • Event-driven workflow block communication system

    • Policy lifecycle management and versioning strategies

    • Hedera blockchain integration for immutable audit trails

    • Document flow design patterns and state management

    • Security considerations and access control architecture

    VM0033 Context: Guardian policy architecture analysis using VM0033 production implementation as reference for tidal wetland restoration methodology digitization.

    Chapter 14: Guardian Workflow Blocks and Configuration

    Purpose: Master Guardian's workflow block system for building environmental certification workflows.

    Key Topics:

    • interfaceDocumentsSourceBlock for document management and filtering

    • buttonBlock configurations for user interactions and workflow transitions

    • requestVcDocumentBlock for data collection and schema integration

    • sendToGuardianBlock for data persistence and blockchain storage

    • Role-based permissions and access control implementation

    • Event-driven communication between workflow blocks

    VM0033 Context: Complete workflow block configuration using VM0033 production policy JSON, covering project submission, VVB approval, and document management workflows.

    Chapter 15: VM0033 Implementation Deep Dive

    Purpose: Deep technical analysis of VM0033 policy implementation using actual Guardian production configurations.

    Key Topics:

    • VVB document approval workflow with real JSON configurations

    • Project submission and review processes using Guardian blocks

    • Role-based workflow analysis (Project_Proponent, VVB, Owner)

    • Document filtering and status management implementations

    • Button configuration patterns for workflow transitions

    • End-to-end integration patterns and event routing

    VM0033 Context: Complete analysis of VM0033 production policy JSON with extracted block configurations, focusing on real-world implementation patterns for tidal wetland restoration certification.

    Chapter 16: Advanced Policy Patterns

    Purpose: Advanced Guardian policy implementation patterns using production VM0033 configurations.

    Key Topics:

    • Transformation blocks for external API integration (Verra project hub)

    • Document validation blocks for data integrity and business rule enforcement

    • External data integration patterns (Kanop satellite monitoring, IoT devices)

    • Policy testing frameworks including dry-run mode and programmatic testing

    • Demo mode configuration for training and development environments

    • Production deployment patterns and error handling strategies

    VM0033 Context: Real implementation examples from VM0033 production policy including dataTransformationAddon for Verra API integration, documentValidatorBlock configurations, and comprehensive testing approaches.

    Part V: Calculation Logic Implementation

    Chapter 17: (Reserved for Part IV completion)

    Purpose: Reserved for additional Part IV content.

    Chapter 18: Custom Logic Block Development

    Purpose: Implement emission reduction calculations using JavaScript in Guardian's customLogicBlock.

    Key Topics:

    • Guardian customLogicBlock architecture and JavaScript execution environment

    • Document input/output handling with credentialSubject field access

    • VM0033 baseline emissions, project emissions, and net emission reduction calculations

    • Schema field integration and Auto Calculate field implementation

    • Error handling and validation within calculation blocks

    • Testing calculation logic outside and within Guardian environment

    VM0033 Context: Complete implementation of VM0033 emission reduction calculations using real production JavaScript from er-calculations.js artifact, including field mapping to PDD and monitoring report schemas.

    Chapter 19: Formula Linked Definitions (FLDs)

    Purpose: Brief foundation chapter establishing FLD concepts for parameter relationship management in Guardian methodologies.

    Key Topics:

    • FLD concept and basic architectural understanding

    • Parameter reuse across multiple schema documents in policy workflows

    • VM0033 parameter relationship examples suitable for FLD implementation

    • Integration patterns with customLogicBlock calculations

    • Basic design principles for FLD frameworks

    VM0033 Context: Concise overview establishing FLD concepts with VM0033 parameter relationship examples, focusing on foundational understanding rather than detailed implementation.

    Chapter 20: Guardian Tools Architecture and Implementation

    Purpose: Build Guardian Tools using extractDataBlock and customLogicBlock patterns, with AR Tool 14 as practical example.

    Key Topics:

    • Guardian Tools architecture as mini-policies with three-block pattern

    • ExtractDataBlock workflows for schema-based data input/output operations

    • CustomLogicBlock integration for standardized calculation implementations

    • AR Tool 14 complete implementation with stratified random sampling

    • Tool versioning, schema evolution, and production deployment patterns

    • Tool integration patterns for use across multiple methodologies

    VM0033 Context: Real AR Tool 14 implementation from Guardian production artifacts showing complete biomass calculation tool that integrates with VM0033 wetland restoration methodology.

    Chapter 21: Calculation Testing and Validation

    Purpose: Comprehensive testing using Guardian's dry-run mode and customLogicBlock testing interface with VM0033 and AR Tool 14 test artifacts.

    Key Topics:

    • Guardian's customLogicBlock testing interface with three input methods (schema-based, JSON editor, file upload)

    • Interactive testing and debugging with Guardian's built-in debug() function

    • Dry-run mode for complete policy workflow testing without blockchain transactions

    • Test artifact validation using final-PDD-vc.json and official methodology spreadsheets

    • Testing at every calculation stage: baseline, project, leakage, and net ERR

    • API-based automated testing using Guardian's REST APIs and Cypress framework

    • Best practices for test data management and systematic testing approaches

    VM0033 Context: Practical testing implementation using VM0033_Allcot_Test_Case_Artifact.xlsx and final-PDD-vc.json with Guardian's testing interface, demonstrating complete validation workflow from individual calculations to full policy testing.

    Part VI: Integration and Testing

    Chapter 22: End-to-End Policy Testing

    Purpose: Testing complete methodology workflows across all stakeholder roles using Guardian's dry-run capabilities and VM0033 production patterns.

    Key Topics:

    • Multi-role testing framework with virtual user management

    • Complete stakeholder workflow simulation (Project Proponent, VVB, Standard Registry)

    • VM0033 workflow testing using policy navigation structure and role transitions

    • Production-scale data validation with large datasets and multi-year monitoring periods

    • Cross-component integration testing validating schema-workflow-calculation consistency

    • Guardian dry-run artifacts and validation procedures for methodology compliance

    VM0033 Context: Complete end-to-end testing using VM0033 policy structure, demonstrating multi-stakeholder workflows from PDD submission through VCU token issuance with role-based testing scenarios.

    Chapter 23: API Integration and Automation

    Purpose: Automating methodology operations using Guardian's REST API framework for production deployment and integration.

    Key Topics:

    • Guardian API authentication patterns with JWT tokens and refresh token management

    • VM0033 policy block API structure using real block IDs for PDD and monitoring report submission

    • Dry-run API operations with virtual user creation and management for automated testing

    • Automated workflow execution class demonstrating complete VM0033 project lifecycle via APIs

    • Cypress testing integration for automated methodology validation and regression testing

    VM0033 Context: Practical API automation using VM0033 policy endpoints, demonstrating automated data submission, virtual user workflows, and production API patterns for scalable methodology operations.

    Part VII: Deployment and Maintenance

    Chapter 24: User Management and Role Assignment

    Purpose: Set up and manage users, roles, and permissions for deployed methodologies.

    Key Topics:

    • User onboarding and account management

    • Role assignment and permission configuration

    • Organization management and multi-tenancy

    • Access control and security policies

    • User training and support procedures

    • Audit and compliance reporting

    VM0033 Context: User management for VM0033 implementation, including VVB accreditation, project developer registration, and Verra administrator roles.

    Chapter 25: Monitoring and Analytics - Guardian Indexer

    Purpose: Monitoring and analytics for deployed methodologies and data submitted via Indexer

    Key Topics:

    • Usage analytics and reporting

    • Data export and reporting capabilities

    • Compliance monitoring and audit trails

    VM0033 Context: Viewing all data on Indexer, tracking project registrations, credit issuances

    Chapter 26: Maintenance and Updates

    Purpose: Maintain and evolve deployed methodologies over time.

    Key Topics:

    • Maintenance procedures and schedules

    • Bug fixing and issue resolution

    • Methodology updates and regulatory changes

    • User feedback integration and feature requests

    • Long-term support and lifecycle planning

    VM0033 Context: Maintenance strategy for VM0033 implementation, including handling Verra methodology updates and regulatory changes.

    Part VIII: Advanced Topics and Best Practices

    Chapter 27: Integration with External Systems

    Purpose: Connect Guardian-based methodologies with external systems and services.

    Key Topics:

    • External system integration patterns

    • Data transformation via blocks

    • Data synchronization and consistency

    • Real-time data feeds and streaming (Metered Policy Example)

    VM0033 Context: Integration of VM0033 with external monitoring systems, satellite data feeds, and Verra's registry systems.

    Chapter 28: Troubleshooting and Common Issues

    Purpose: Provide solutions for common problems encountered during methodology digitization.

    Key Topics:

    • Common digitization pitfalls and solutions

    • Debugging techniques and tools

    • Data quality issues and resolution

    • User experience problems and fixes

    • Integration and compatibility issues

    VM0033 Context: Some specific troubleshooting scenarios encountered during VM0033 implementation and their solutions.


    Implementation Notes

    Each chapter will include:

    • Practical Examples: Real code, configurations, and screenshots from VM0033 implementation

    • Best Practices: Lessons learned and recommended approaches

    • Common Pitfalls: What to avoid and how to prevent issues

    • Testing Strategies: How to validate each component

    • Performance Considerations: Optimization tips and scalability guidance

    • Maintenance Notes: Long-term considerations and update strategies

    The handbook is designed to be both a learning resource and a reference guide, with clear navigation between conceptual understanding and practical implementation.

    {
      "id": "819d94e8-7d1d-43c1-a228-9b6fa1982e3f",
      "blockType": "dataTransformationAddon",
      "defaultActive": false,
      "permissions": ["Project_Proponent"],
      "onErrorAction": "no-action",
      "tag": "project-description",
      "expression": "(function calc() {\n  const jsons = [];\n  if (documents && documents.length > 0) {\n    documents.forEach((doc) => {\n      const document = doc.document;\n\n      const json = {\n        id: '',\n        projectNumber: null,\n        accountId: '',\n        standardTemplate: '',\n        standardTemplateName: '',\n        methodologyTemplateTitle: '',\n        methodologyTemplate: '',\n        projectName: '',\n        projectDescription: '',\n        website: null,\n        projectSubmissionStatus: 'Draft',\n        fetchProjectBoundaryFromCalculationInput: false,\n        estimatedProjectStartDate: '',\n        creditPeriod: {\n          startDate: '',\n          endDate: ''\n        },\n        projectSize: null,\n        averageAnnualVolume: null,\n        integratedModules: null,\n        integratedTools: null,\n        integratedMethodologies: null,\n        projectType: '14',\n        useManualCalculation: null,\n        locations: [],\n        projectProponents: [''],\n        projectProponentsWithDetails: null,\n        vcs: {\n          afoluActivities: [],\n          projectValidatorId: null,\n          additionalProjectTypes: [],\n          earlyAction: null\n        },\n        ccb: {\n          ccbStandard: null,\n          ccbStandardName: null,\n          projectTypeId: null,\n          distinctions: [],\n          auditorSiteVisitStartDate: null,\n          auditorSiteVisitEndDate: null,\n          ccbVerifierList: [],\n          projectValidatorId: null\n        },\n        sdVista: null,\n        plasticWRP: null,\n        registryDocumentUploadData: null,\n        calculationInputs: {\n          projectBoundaryProject: ['', '', '', '', '', '', '', '', '', ''],\n          projectBoundaryBaseline: ['', '', '', '', '', '', '', '', '', '']\n        },\n        otherJsonContents: {\n          cover: {\n            version: '',\n            projectId: '',\n            dateOfIssue: '',\n            projectTitle: '',\n            projectWebsite: '',\n            projectLifeTime: {\n              endDate: null,\n              startDate: null\n            },\n            standardVersion: '',\n            accountingPeriod: {\n              endDate: null,\n              startDate: null\n            },\n            expectedSchedule: '',\n            projectProponent: '',\n            verificationBody: '',\n            goldLevelCriteria: '',\n            recentDateOfIssue: '',\n            ccbStandardVersion: '',\n            documentPreparedBy: '',\n            historyOfCcbStatus: '',\n            multipleProjectLocation: null\n          }\n        }\n      };\n\n      jsons.push(json);\n    });\n  }\n  return jsons;\n})"
    }
    // Guardian to Verra API transformation pattern
    function transformProjectData(guardianDocument) {
      // Extract Guardian schema data
      const projectData = guardianDocument.credentialSubject[0];
    
      // Map to Verra API structure
      return {
        projectName: projectData.project_details.G5,
        projectDescription: projectData.project_details.project_description,
        estimatedProjectStartDate: projectData.crediting_period.start_date,
        projectType: '14', // VM0033 wetland restoration
        vcs: {
          afoluActivities: ['Wetland restoration'],
          projectValidatorId: null
        },
        locations: [{
          country: projectData.location.country,
          coordinates: projectData.location.coordinates
        }]
      };
    }
    {
      "blockType": "documentValidatorBlock",
      "tag": "validate_project_submission",
      "permissions": ["VVB"],
      "defaultActive": true,
      "onErrorAction": "no-action",
      "stopPropagation": true,
      "documentType": "VC Document",
      "checkSchema": true,
      "checkOwnDocument": true,
      "checkAssignDocument": false,
      "conditions": [
        {
          "type": "Equal",
          "field": "document.type",
          "value": "project"
        },
        {
          "type": "Not Equal",
          "field": "option.status",
          "value": "Rejected"
        },
        {
          "type": "In",
          "field": "document.credentialSubject.0.methodology",
          "value": ["VM0033", "VM0007", "VM0048"]
        }
      ]
    }
    {
      "blockType": "documentValidatorBlock",
      "tag": "validate_project_eligibility",
      "conditions": [
        {
          "type": "Equal",
          "field": "document.credentialSubject.0.project_type",
          "value": "wetland_restoration"
        },
        {
          "type": "In",
          "field": "document.credentialSubject.0.location.country",
          "value": ["USA", "Canada", "Mexico"]
        },
        {
          "type": "Not Equal",
          "field": "document.credentialSubject.0.start_date",
          "value": ""
        }
      ]
    }
    {
      "blockType": "documentValidatorBlock",
      "tag": "validate_vvb_assignment",
      "checkAssignDocument": true,
      "conditions": [
        {
          "type": "Equal",
          "field": "assignedTo",
          "value": "[current_user_did]"
        },
        {
          "type": "Equal",
          "field": "option.status",
          "value": "Assigned for Validation"
        }
      ]
    }
    {
      "blockType": "externalDataBlock",
      "tag": "kanop_mrv_data",
      "permissions": ["Project_Proponent"],
      "defaultActive": true,
      "entityType": "MRV",
      "schema": "#satellite-monitoring-schema"
    }
    {
      "blockType": "externalDataBlock",
      "tag": "iot_stove_data",
      "permissions": ["Project_Proponent"],
      "entityType": "StoveUsage",
      "schema": "#iot-monitoring-schema"
    }
    // Process IoT cookstove data for emission calculations
    function processStoveUsageData(iotData) {
      return {
        total_fuel_saved_kg: iotData.fuel_consumption.baseline - iotData.fuel_consumption.project,
        average_efficiency: iotData.efficiency_metrics.mean,
        usage_hours_per_day: iotData.burn_duration.daily_average,
        co2_emissions_reduced: calculateEmissionReductions(iotData.fuel_consumption),
        data_quality_score: iotData.quality_metrics.completeness
      };
    }
    {
      "blockType": "documentValidatorBlock",
      "tag": "validate_external_data",
      "conditions": [
        {
          "type": "Not Equal",
          "field": "satellite_data.confidence_level",
          "value": null
        },
        {
          "type": "In",
          "field": "satellite_data.confidence_level",
          "value": ["high", "medium"]
        },
        {
          "type": "Not Equal",
          "field": "satellite_data.measurement_date",
          "value": ""
        }
      ]
    }
    # Via API
    PUT /api/v1/policies/{policyId}/dry-run
    POST /api/v1/policies/{policyId}/dry-run/restart
    GET /api/v1/policies/{policyId}/dry-run/transactions?pageIndex=0&pageSize=100
    GET /api/v1/policies/{policyId}/dry-run/artifacts?pageIndex=0&pageSize=100
    GET /api/v1/policies/{policyId}/dry-run/ipfs?pageIndex=0&pageSize=100
    {
      "policyTests": [
        {
          "name": "Complete Project Lifecycle",
          "description": "Test full project workflow from submission to token issuance",
          "steps": [
            {
              "action": "submit_project",
              "user": "Project_Proponent",
              "data": "test_project_pdd.json",
              "expectedStatus": "Waiting to be Added"
            },
            {
              "action": "approve_project",
              "user": "OWNER",
              "expectedStatus": "Approved"
            },
            {
              "action": "assign_vvb",
              "user": "Project_Proponent",
              "data": {"vvb_did": "test_vvb_001"},
              "expectedStatus": "Assigned for Validation"
            },
            {
              "action": "validate_project",
              "user": "VVB",
              "expectedStatus": "Validated"
            }
          ],
          "expectedOutcome": {
            "tokens_minted": 1000,
            "documents_created": 4,
            "final_status": "Credited"
          }
        }
      ]
    }
    # Run all policy tests
    POST /api/v1/policies/{policyId}/tests/run
    
    # Run specific test
    POST /api/v1/policies/{policyId}/tests/{testId}/run
    
    # Get test results
    GET /api/v1/policies/{policyId}/tests/{testId}/results
    {
      "testId": "complete_lifecycle_test",
      "status": "SUCCESS",
      "duration": "45.2s",
      "steps": [
        {
          "step": "submit_project",
          "status": "PASSED",
          "actualStatus": "Waiting to be Added",
          "expectedStatus": "Waiting to be Added"
        },
        {
          "step": "approve_project",
          "status": "PASSED",
          "actualStatus": "Approved",
          "expectedStatus": "Approved"
        }
      ],
      "artifacts": {
        "documentsCreated": 4,
        "tokensMinted": 1000,
        "transactionsSimulated": 12
      }
    }
    {
      "testId": "validation_workflow_test",
      "status": "FAILURE",
      "failureReason": "Document validation failed",
      "failedStep": {
        "step": "validate_project",
        "expected": "Validated",
        "actual": "Validation Failed",
        "error": "Missing required field: baseline_methodology"
      },
      "documentComparison": {
        "expected": {
          "status": "Validated",
          "validation_report": "present"
        },
        "actual": {
          "status": "Validation Failed",
          "validation_errors": ["baseline_methodology is required"]
        }
      }
    }

    Chapter 2: Understanding VM0033 Methodology

    VM0033 "Methodology for Tidal Wetland and Seagrass Restoration" is a sophisticated 130-page framework designed specifically for blue carbon projects. Understanding this methodology is essential because it represents the technical complexity that modern digitization platforms must handle - comprehensive calculation requirements, multiple stakeholder roles, and intricate validation logic that must all be preserved when moving from manual to automated processes.

    Digitization Context: VM0033 demonstrates why methodology digitization is more than document conversion. The methodology's complexity requires sophisticated digital systems that can embed technical requirements within automated certification workflows while maintaining scientific rigor.

    VM0033 Scope and Applicability

    VM0033 addresses tidal wetland restoration across three interconnected ecosystem types, reflecting the scientific understanding that coastal restoration requires integrated approaches rather than isolated interventions. This systems-thinking approach creates complexity that demands sophisticated digital implementation.

    Ecosystem Coverage:

    • Tidal Forests: Mangroves and woody vegetation under tidal influence, representing some of the most carbon-dense ecosystems on Earth

    • Tidal Marshes: Emergent herbaceous vegetation in intertidal zones, providing critical habitat while storing substantial carbon in soils

    • Seagrass Meadows: Submerged aquatic vegetation in shallow coastal waters, supporting marine biodiversity while sequestering carbon in biomass and sediments

    Core Definition: "Re-establishing or improving the hydrology, salinity, water quality, sediment supply and/or vegetation in degraded or converted tidal wetlands."

    This definition emphasizes that restoration goes beyond simple replanting to address the fundamental processes that support healthy wetland function.

    Eligible Project Activities

    VM0033 recognizes that successful restoration requires addressing multiple stressors simultaneously rather than implementing single interventions. The methodology organizes eligible activities into four primary categories:

    Hydrological Management:

    • Remove tidal barriers (dikes, levees, undersized culverts)

    • Improve hydrological connectivity through enlarged culverts and new channels

    • Restore natural tidal flow to previously restricted wetlands

    • Implement phased approaches for gradual ecosystem adjustment

    Sediment Management:

    • Beneficial use of clean dredge material for elevation building

    • River sediment diversions to sediment-starved areas

    • Strategic sediment placement for vegetation support

    • Quality considerations for timing and environmental impact

    Water Quality Enhancement:

    • Nutrient load reduction (critical for seagrass restoration)

    • Improved water clarity through reduced residence time

    • Restored tidal and hydrologic flushing patterns

    • Coordination with upstream land management systems

    Vegetation Management:

    • Native plant community reestablishment (reseeding/replanting)

    • Invasive species removal and control

    • Improved management practices (reduced grazing pressure)

    • Address underlying stressors favoring invasive species

    Applicability Requirements and Exclusions

    VM0033 includes specific requirements to ensure projects deliver genuine emission reductions without causing negative impacts elsewhere. Project areas must be free of displaceable land uses, demonstrated through evidence of abandonment for two or more years, economic unprofitability, or legal prohibitions on alternative uses. This requirement prevents projects from simply displacing activities to other locations where they might cause emissions.

    Critical Exclusions: VM0033 projects cannot include commercial forestry, water table lowering (except specific conversions), organic soil burning, or nitrogen fertilizer application during the crediting period.

    The methodology excludes several activities that could undermine restoration objectives or create perverse incentives. Commercial forestry is prohibited in baseline activities to prevent projects from claiming credit for avoiding timber harvest that was never economically viable. Water table lowering is generally prohibited except for specific conversions from open water to tidal wetland. Organic soil burning and nitrogen fertilizer application are excluded due to their potential to increase greenhouse gas emissions and compromise ecosystem integrity.

    Project Boundaries and Temporal Considerations

    VM0033 establishes sophisticated temporal boundaries that account for the long-term nature of soil carbon dynamics in coastal systems. The methodology introduces two innovative concepts that address a fundamental challenge in wetland carbon accounting: how to claim credit for preserving carbon stocks that are finite and will eventually be depleted even under restoration scenarios.

    Key Innovation: VM0033's PDT and SDT concepts provide practical tools for addressing finite soil carbon stocks while maintaining scientific rigor in long-term carbon accounting.

    Temporal Boundary Concepts:

    Peat Depletion Time (PDT) - Organic Soils:

    • Definition: Time when all peat disappears or reaches no further oxidation level

    • Calculation Factors: Average organic soil depth above drainage limit, soil loss rate from subsidence/fire

    • Requirement: Conservative estimates remaining constant over time

    • Purpose: Ensures emission reduction claims don't exceed realistic preservation potential

    Soil Organic Carbon Depletion Time (SDT) - Mineral Soils:

    • Eroded Soils: Conservatively set at 5 years

    • Excavated/Drained Soils: Based on average organic carbon stock and oxidation loss rate

    • Purpose: Limits period for claiming emission reductions from restoration

    These temporal concepts reflect VM0033's practical approach to carbon accounting in dynamic coastal environments where complete permanence is unrealistic but significant climate benefits can still be achieved through restoration activities.

    Geographic Boundary Requirements:

    Mandatory Stratification Factors:

    • Organic vs. mineral soil areas

    • Seagrass meadows vs. other wetland types

    • Native ecosystems vs. degraded areas

    • Purpose: Ensure emission calculations reflect diverse project conditions

    Salinity Stratification (Unique VM0033 Feature):

    • Basis: Methane emissions vary significantly with salinity levels

    • Requirements: Stratify by salinity averages and low points during peak emission periods

    • Timing: Focus on growing seasons in temperate ecosystems

    • Result: Accurate methane accounting across salinity gradients

    Sea Level Rise Integration:

    • Assessment Required: Potential area loss due to sea level rise

    • Procedures: Estimate eroded strata areas over time

    • Purpose: Ensure emission reduction claims remain valid under changing climate

    Carbon Pools Included:

    • Aboveground biomass (trees, shrubs, herbaceous vegetation)

    • Belowground biomass (root systems)

    • Dead wood and litter

    • Soil organic carbon (most significant pool)

    Greenhouse Gas Sources:

    • CO₂: Emissions and removals from biomass and soil

    • CH₄: Emissions from soil and biomass (salinity-dependent)

    • N₂O: Emissions from soil and biomass

    • Flexibility: Conservative approaches allowed where direct measurement not feasible

    The comprehensive boundary approach recognizes that tidal wetland restoration involves complex, interconnected systems where changes in one component affect multiple others. Safeguards prevent double-counting and leakage that could undermine project integrity while ensuring that complex requirements can be translated into automated policy workflows for diverse coastal restoration contexts.

    Baseline Scenarios and Project Activities

    VM0033 recognizes that tidal wetland systems exist along a continuum from highly degraded to fully functional ecosystems. The baseline scenario represents what would occur without the restoration project, serving as the reference point for measuring emission reductions.

    Baseline Scenario Determination:

    Analysis Requirements:

    • Systematic analysis of historical trends, current conditions, likely future developments

    • Consider continued degradation, drainage, natural recovery potential, existing management practices

    • Account for regulatory frameworks and protected area designations

    Degraded Wetland Baselines:

    • Organic Soils: Continued oxidation releasing stored carbon as CO₂, subsidence from decomposition

    • Mineral Soils: Continued erosion and organic carbon loss, particularly from wave action/altered hydrology

    • Fire-Prone Areas: Organic soil combustion as additional emission source

    • Equilibrium Consideration: Carbon loss rates may decrease as readily available organic matter depletes

    Sea Level Rise Integration:

    • Migration Assessment: Evaluate wetland migration pathways and barriers (development, topographic constraints)

    • Barrier Impact: Areas unable to migrate inland face higher baseline emission rates from open water conversion

    • Dynamic Boundaries: Rising seas affect both baseline and project scenarios over time

    Project Activity Categories:

    Hydrological Restoration (Most Fundamental):

    • Barrier Removal: Remove dikes, levees, undersized culverts to restore natural tidal flow

    • Connectivity Improvement: Enlarge culverts, create channels, remove flow restrictions

    • Impoundment Restoration: Careful consideration of water levels, sediment loads, adjacent impacts

    • Phased Approach: Gradual ecosystem adjustment to avoid negative short-term impacts

    Sediment Management:

    • Beneficial Use: Place clean dredge material where elevation needed for vegetation support

    • River Diversions: Redirect sediment-laden water to areas with disrupted natural supply

    • Quality Considerations: Sediment quality, placement timing, vegetation/wildlife impact avoidance

    • Dual Benefits: Provides mineral sediments for elevation and freshwater for optimal salinity

    Salinity Management:

    • Freshwater Restoration: Improved stormwater management, groundwater enhancement, modified releases

    • Saltwater Exchange: Improved tidal connectivity, barrier removal

    • Ecosystem-Specific: Different tolerances for seagrass meadows, salt marshes, mangrove systems

    • Multi-Factor Approach: Address tidal connectivity, freshwater inputs, drainage patterns simultaneously

    Water Quality Improvement:

    • Nutrient Reduction: Decrease nitrogen/phosphorus inputs preventing eutrophication and algal blooms

    • Sediment Load Management: Reduce excessive inputs that smother communities or alter bathymetry

    • Natural Flushing: Restore exchange patterns reducing pollutant residence time

    • Coordination Required: Often involves upstream land management and stormwater systems

    Vegetation Management:

    • Native Reestablishment: Collect/propagate local genetic material, prepare seedbeds, optimal timing

    • Invasive Control: Address underlying stressors (altered hydrology, nutrient enrichment) favoring invasives

    • Grazing Management: Modify livestock access/timing while potentially maintaining traditional use

    • Adaptive Approach: Moderate grazing may be beneficial in historically grazed systems

    Key Principle: Successful restoration requires addressing multiple stressors simultaneously with adaptive management approaches that maintain rigorous emission reduction standards.

    Stakeholder Ecosystem and Roles

    VM0033 projects operate within a complex network of stakeholders, each bringing distinct expertise, responsibilities, and interests to coastal restoration initiatives. The methodology's success depends on effective coordination among these diverse participants, from technical specialists to local communities to financial institutions. Understanding this stakeholder ecosystem is crucial for project implementation and for designing digital platforms that can accommodate varied needs and capabilities.

    Guardian Integration: The platform's roles and permissions system accommodates VM0033's diverse stakeholder types, from project proponents to VVBs, each with different access needs and responsibilities.

    Key Stakeholder Types:

    Project Proponents (Primary Drivers):

    • Entity Types: Government agencies, non-profits, private companies, collaborative partnerships

    • Required Expertise: Wetland ecology, restoration techniques, carbon accounting, regulatory compliance

    • Core Responsibilities: Site selection, restoration planning, stakeholder coordination, implementation oversight, monitoring execution, verification support

    • Success Factors: Technical expertise + project management + local ecological/social understanding

    Validation and Verification Bodies (VVBs):

    • Role: Independent assessment of project compliance with VM0033 requirements

    • Required Expertise: Carbon accounting + wetland ecology + sophisticated ecological processes

    • Activities: Initial validation (design compliance) + ongoing verification (implementation results)

    • Evaluation Areas: Baseline scenarios, project activities, monitoring data quality, emission calculations

    Technical Expert Teams (Complementary Skills Required):

    • Ecological Experts: Wetland ecosystem function, species requirements, restoration techniques, monitoring

    • Hydrological Experts: Water flow patterns, tidal dynamics, sediment transport, hydrology-ecosystem interactions

    • Soil Scientists: Carbon stock assessment, soil classification, biogeochemical processes

    • Carbon Accounting Specialists: VCS compliance, methodology requirements, ecological-market bridge

    Regulatory Agencies (Multi-Level):

    • Local: Environmental permits, land use approvals, local environmental regulations

    • State/Provincial: Wetland protection, water quality, coastal zone management

    • National: Carbon market participation, international climate commitments

    • Complexity: Multiple jurisdictions, federal/state/local overlap, carbon market compliance

    Local Communities (Project Success Determinants):

    • Types: Indigenous peoples, fishing communities, agricultural communities, coastal residents

    • Engagement Requirements: Understanding local values, concerns, traditional knowledge systems

    • Benefits Beyond Carbon: Flood protection, enhanced fisheries habitat, recreational opportunities

    • Success Factor: Traditional ecological knowledge incorporation

    Landowners and Land Managers:

    • Types: Private landowners, government agencies, non-profits, community groups

    • Critical Role: Site access control, direct implementation participation

    • Relationship Variations: Direct ownership vs. long-term agreements

    • Requirement: Long-term land tenure security demonstration

    Financial Stakeholders:

    • Types: Carbon credit buyers, project investors, grant providers, financial institutions

    • Requirements: Financial transparency, risk management, return on investment

    • Buyer Categories: Corporations (offsets), governments (climate commitments), traders/intermediaries

    • Varying Needs: Credit quality standards, verification requirements, co-benefit preferences

    The interconnected nature of these stakeholder relationships requires coordination mechanisms that can accommodate diverse interests while maintaining focus on restoration objectives and carbon market requirements. Digital platforms must support these complex relationships through appropriate access controls, communication tools, and workflow management capabilities.

    Emission Sources and Carbon Pools

    VM0033 addresses the complex biogeochemical processes occurring in tidal wetland systems through comprehensive accounting of multiple greenhouse gas sources and carbon pools. Understanding these sources and pools is essential for accurate emission reduction quantification and for designing monitoring programs that capture all significant changes in greenhouse gas fluxes.

    Carbon Pools Overview

    Primary Pools:

    • Soil organic carbon (most significant)

    • Aboveground biomass (trees, shrubs, herbaceous)

    • Belowground biomass (root systems)

    • Dead wood and litter

    Key Distinctions:

    • Autochthonous vs. allochthonous soil carbon

    • Organic vs. mineral soil systems

    • Fresh vs. saltwater methane emissions

    Carbon Storage in Wetland Systems

    Soil organic carbon represents the most significant carbon pool in most wetland systems, with the potential to accumulate enormous quantities over centuries or millennia under anaerobic conditions. This carbon exists in forms ranging from recently deposited plant material to highly decomposed organic matter that can persist for thousands of years. The methodology distinguishes between autochthonous carbon derived from internal vegetation and allochthonous carbon from upstream, tidal, or atmospheric sources. Projects can only claim credit for carbon that wouldn't accumulate under baseline conditions, preventing overestimation of restoration benefits.

    Biomass carbon pools encompass both aboveground components including trees, shrubs, and herbaceous vegetation, and belowground root systems. Wetland systems can achieve remarkable productivity under appropriate conditions, ranking among Earth's most productive ecosystems. However, biomass carbon stocks are highly variable based on species composition, age structure, and environmental conditions. Quantification requires specialized procedures adapted for wetland systems, particularly for herbaceous vegetation with significant seasonal variability.

    Dead wood and litter can represent substantial carbon pools in forested wetland systems. Under anaerobic conditions, these materials accumulate carbon rather than decomposing rapidly. However, they become emission sources when exposed to aerobic conditions through drainage or other disturbances, requiring careful consideration in project design and monitoring.

    Greenhouse Gas Dynamics

    Carbon dioxide (CO₂) represents the most significant greenhouse gas flux in most wetland restoration projects. Emissions occur primarily through soil organic carbon oxidation when anaerobic soils are exposed to oxygen through drainage or excavation activities. These emissions can continue for years or decades depending on soil carbon content and environmental conditions. Removals occur through photosynthesis and subsequent carbon storage in biomass and soil pools, with plant material decomposing under anaerobic conditions to form stable organic matter.

    Methane (CH₄) presents a unique challenge in wetland carbon accounting due to its natural production through anaerobic decomposition processes. Methane emissions vary significantly based on salinity, temperature, vegetation type, and organic matter availability. The salinity effect is particularly important: freshwater systems typically produce more methane than saltwater systems because sulfate in seawater inhibits methanogenic bacteria. VM0033 addresses this variability through stratification by salinity conditions and provides default emission factors when site-specific data are unavailable.

    Nitrous oxide (N₂O) emissions occur primarily at the interface between aerobic and anaerobic zones where nitrification and denitrification processes take place. While typically smaller in magnitude than CO₂ or methane fluxes, N₂O emissions are significant due to the gas's high global warming potential. The methodology allows conservative approaches that avoid overestimation while capturing significant sources, with options for direct monitoring or use of conservative default values.

    The comprehensive approach to greenhouse gas accounting ensures that VM0033 projects deliver net climate benefits by accounting for all significant emission sources and removals. This thorough accounting builds confidence in the methodology's environmental integrity while providing practical guidance for project implementation across diverse coastal restoration contexts.

    Monitoring Requirements and Verification Processes

    VM0033's monitoring requirements address the complexity of tracking carbon dynamics across multiple pools and greenhouse gas sources in dynamic coastal environments. The monitoring program must capture measurable changes while accounting for natural variability and measurement uncertainty inherent in wetland systems.

    Monitoring Program Objectives

    Wetland restoration projects operate over multi-decade timeframes, requiring monitoring systems that can demonstrate carbon performance throughout extended crediting periods. The monitoring program serves three primary functions:

    • Performance Verification: Quantifying actual carbon sequestration and emission reductions against projected baselines

    • Adaptive Management: Identifying restoration challenges early to enable corrective actions

    • Compliance Documentation: Providing verifiable evidence of methodology adherence for carbon market participation

    Core Monitoring Components

    Soil Carbon Monitoring

    Soil carbon represents the largest carbon pool in most wetland systems but presents significant measurement challenges due to high spatial variability and slow rates of change. VM0033 requires establishment of permanent monitoring plots with precise geospatial coordinates to enable repeated measurements over time.

    The methodology specifies stratified sampling approaches based on ecosystem type, soil characteristics, and restoration activities. Each stratum requires sufficient sample plots to achieve statistical significance when scaling plot-level measurements to project-level estimates. Soil sampling protocols address sampling depth, timing, and laboratory analysis procedures to ensure consistency and accuracy.

    Soil carbon changes occur gradually, requiring monitoring programs with sufficient statistical power to detect meaningful changes above background variability. The methodology provides guidance on sampling intensity and frequency based on expected rates of change and required precision levels.

    Biomass Carbon Monitoring

    Tree and shrub monitoring follows established forestry protocols adapted for wetland conditions. Standard diameter and height measurements combine with species-specific allometric equations to estimate biomass and carbon content. The methodology incorporates procedures from CDM AR-Tool14 for woody biomass quantification.

    Herbaceous vegetation monitoring requires different approaches due to seasonal variability and diverse growth forms. Monitoring protocols must account for seasonal patterns, species composition changes, and disturbance effects while providing reliable estimates of carbon stock changes.

    Hydrological Monitoring

    Hydrological conditions directly influence ecosystem function and carbon dynamics. Continuous monitoring of water levels documents changes in hydroperiod and water depth that affect both ecosystem restoration success and carbon sequestration rates.

    Salinity monitoring tracks water chemistry changes that influence species composition and biogeochemical processes, particularly methane emissions. The methodology requires stratification by salinity conditions due to significant effects on greenhouse gas production rates.

    Vegetation Community Monitoring

    Vegetation monitoring documents changes in species composition, cover, and structural characteristics resulting from restoration activities. This monitoring validates restoration success, documents habitat improvements, and supports carbon stock change calculations.

    Monitoring protocols must be appropriate for target ecosystem types and restoration objectives, incorporating quantitative sampling methods, qualitative condition assessments, and photographic documentation of temporal changes.

    Verification Process Requirements

    Independent verification provides objective assessment of project implementation and carbon performance. Verification bodies (VVBs) must possess expertise in both carbon accounting methodologies and wetland ecology to adequately evaluate project compliance.

    Verification Scope and Activities

    The verification process encompasses multiple assessment components:

    • Field Verification: On-site assessment of restoration implementation, monitoring equipment, and ecosystem conditions

    • Data Validation: Review of monitoring data quality, calculation procedures, and quality assurance measures

    • Methodology Compliance: Evaluation of project adherence to VM0033 requirements and procedures

    • Stakeholder Consultation: Interviews with project personnel, local communities, and relevant stakeholders

    Verification Timeline

    Initial validation occurs before credit issuance, confirming project design compliance with VM0033 requirements. Periodic verification throughout the crediting period validates ongoing performance and continued methodology compliance.

    Quality Assurance Framework

    VM0033 requires comprehensive quality assurance measures throughout the monitoring program:

    Equipment Calibration: All monitoring equipment requires regular calibration and maintenance according to manufacturer specifications. GPS units, water level sensors, and laboratory equipment need documented calibration schedules.

    Data Management Systems: Monitoring data must be stored in secure systems with backup procedures and clear chain of custody documentation. Data management systems must ensure long-term preservation while enabling independent verification access.

    Personnel Training: Monitoring staff require training in standardized procedures to ensure consistency across time periods and personnel changes. Training documentation and competency verification are required.

    Documentation Standards: All monitoring activities require detailed documentation including protocols, equipment specifications, environmental conditions, and quality control measures.

    Implementation Challenges and Solutions

    Site Access Limitations: Coastal wetland sites may be inaccessible during certain seasons or weather conditions. Monitoring programs require contingency plans and flexible scheduling to maintain data continuity.

    Equipment Durability: Saltwater environments and extreme weather conditions can compromise monitoring equipment. Projects need maintenance schedules, backup equipment, and weather-resistant installations.

    Natural System Variability: Wetland systems exhibit natural variation across multiple temporal scales. Monitoring programs must distinguish between natural fluctuations and restoration-induced changes through appropriate statistical approaches and baseline data collection.

    Long-term Program Consistency: Multi-decade projects face inevitable personnel turnover. Detailed standard operating procedures, training programs, and institutional knowledge management systems help maintain monitoring consistency.

    Guardian Integration: The platform supports monitoring through mobile data collection applications, external dMRV platform integrations, automated quality validation, and integrated verification workflows connecting project teams with verification bodies.

    The comprehensive monitoring and verification requirements ensure that VM0033 projects deliver measurable, verifiable carbon benefits while maintaining the scientific rigor necessary for carbon market credibility. These requirements, while demanding, provide the foundation for scaling coastal ecosystem restoration through market-based mechanisms.

    Methodology Relationships and Integration

    VM0033 operates within an interconnected framework of environmental methodologies and standardized tools. The methodology builds upon established procedures while introducing innovative approaches specific to tidal wetland restoration. Understanding these relationships is essential for effective implementation and recognizing opportunities for cross-methodology integration.

    CDM Tool Integration: VM0033 incorporates multiple CDM tools (AR-Tool02, AR-Tool03, AR-Tool14, AR-Tool05) that are available in Guardian's methodology library.

    Foundation on CDM Tools

    VM0033 leverages several Clean Development Mechanism (CDM) tools that provide standardized approaches for common carbon accounting challenges:

    AR-Tool02 - Additionality Assessment: This combined tool provides the framework for VM0033's additionality demonstration, ensuring consistency with established approaches for proving that projects would not occur without carbon market incentives. The tool's structured approach helps project developers navigate complex additionality requirements while maintaining credibility with verification bodies.

    AR-Tool03 - Statistical Sampling: This tool informs VM0033's approach to determining appropriate sample sizes for biomass and carbon stock measurements. It ensures monitoring programs achieve sufficient statistical power to detect meaningful changes while avoiding unnecessarily intensive sampling that could compromise project economics.

    AR-Tool14 - Woody Biomass Quantification: VM0033 directly incorporates procedures from this tool for estimating carbon stocks and changes in trees and shrubs. This integration ensures consistency with established forestry carbon accounting while adapting to the unique challenges of wetland environments.

    AR-Tool05 - Fossil Fuel Emissions: The methodology uses this tool to account for emissions from project implementation activities including equipment operation, transportation, and prescribed burning. This ensures comprehensive accounting of all significant emission sources in net benefit calculations.

    VCS Methodology Relationships

    VM0033's development built upon lessons from related VCS methodologies, particularly those addressing coastal and wetland ecosystems:

    VM0024 - Coastal Wetland Creation: This earlier methodology provided important precedents for coastal ecosystem carbon dynamics, though VM0033 significantly expands the scope to include restoration activities and addresses a broader range of ecosystem types.

    Cross-Methodology Learning: VM0033's approaches to addressing sea level rise, stakeholder engagement complexity, and ecosystem service integration provide models that inform development of other environmental methodologies.

    VCS Module Integration

    The methodology incorporates several VCS modules that provide standardized approaches for common implementation challenges:

    VMD0005 - Wood Products: This module enables VM0033 projects to account for carbon storage in harvested wood products, recognizing that coastal forests may require strategic harvesting before tree mortality due to sea level rise impacts.

    Special Feature: Long-term carbon storage in wood products (trees harvested before sea level rise dieback)

    VMD0016 - Area Stratification: This module provides guidance for dividing project areas into homogeneous units for monitoring and accounting. It's particularly important for VM0033 given the high spatial variability in coastal ecosystems and requirements for stratification based on ecosystem type, soil characteristics, and restoration activities.

    VMD0019 - Future Projections: This module supports VM0033's baseline scenario development, particularly for incorporating sea level rise impacts and long-term ecosystem trajectories. It provides standardized approaches for integrating climate change projections into baseline development.

    VMD0052 - Wetland Additionality: Developed specifically to support VM0033 implementation, this module provides detailed guidance for demonstrating additionality in wetland restoration contexts where multiple benefits beyond carbon sequestration may motivate project development.

    Scientific Literature Integration

    VM0033 incorporates extensive scientific literature to inform default values, calculation procedures, and monitoring approaches. The methodology references peer-reviewed studies to ensure carbon accounting reflects current scientific understanding of wetland carbon dynamics.

    The approach balances scientific rigor with practical implementation requirements. Default values and procedures are based on comprehensive literature reviews but designed to be conservative and applicable across diverse geographic and ecological contexts.

    Regulatory Framework Coordination

    VM0033's relationship with regulatory frameworks varies by jurisdiction but often involves coordination with existing wetland protection and restoration programs. Many jurisdictions have established wetland conservation policies that may complement or conflict with carbon market objectives.

    The methodology anticipates integration with existing environmental monitoring and reporting systems, recognizing that many restoration projects occur within broader environmental management programs. This integration can reduce monitoring costs and improve data quality while ensuring compliance with multiple regulatory requirements.

    International Framework Alignment

    VM0033 aligns with several international environmental frameworks:

    Ramsar Convention on Wetlands: The methodology supports wetland conservation objectives while providing economic incentives for restoration.

    Convention on Biological Diversity: VM0033 projects often deliver biodiversity co-benefits that support national biodiversity strategies.

    UNFCCC: The methodology contributes to national climate commitments while providing practical implementation tools at project scales.

    Innovation and Contribution

    VM0033 contributes several innovations to the broader methodology landscape:

    Temporal Boundary Concepts: The Peat Depletion Time (PDT) and Soil Organic Carbon Depletion Time (SDT) concepts provide practical approaches to addressing long-term carbon dynamics that may be applicable to other ecosystem types.

    Sea Level Rise Integration: The methodology's systematic approach to incorporating climate change impacts provides a model for other methodologies addressing climate-vulnerable ecosystems.

    Comprehensive GHG Accounting: VM0033's integration of multiple greenhouse gases and carbon pools provides a model for comprehensive carbon accounting that addresses the full range of climate impacts from ecosystem management.

    Guardian Platform Integration

    Understanding VM0033's methodology relationships provides essential context for Guardian platform implementation. The platform's modular architecture enables reuse of common tools and procedures across multiple methodologies while maintaining specific requirements for each methodology.

    Cross-methodology references and shared calculation procedures must be reflected in policy workflows that can accommodate the interconnected nature of environmental methodologies. This integration capability is crucial for scaling environmental asset tokenization across diverse project types and geographic contexts.

    The methodology's sophisticated integration requirements demonstrate both the challenges and opportunities in environmental asset digitization, where complex ecological and regulatory systems must be translated into automated workflows that maintain scientific rigor while enabling efficient implementation and verification.

    Preparing for Guardian Implementation

    With this deep understanding of VM0033's requirements, stakeholders, and processes, you're now prepared to explore how Guardian's technical architecture can accommodate this methodology's complexity. The platform's Policy Workflow Engine must handle VM0033's sophisticated temporal boundaries, multi-stakeholder processes, and comprehensive monitoring requirements.

    Key implementation considerations include:

    Workflow Complexity: VM0033's multiple project activity types and stakeholder roles require flexible workflow designs that can accommodate diverse restoration approaches while maintaining consistent carbon accounting standards.

    Data Management: The methodology's extensive monitoring requirements necessitate robust data collection, validation, and storage systems that can handle long-term datasets with high spatial and temporal resolution.

    Calculation Engines: VM0033's sophisticated carbon accounting procedures, including PDT and SDT calculations, require automated calculation engines that can handle complex biogeochemical models while maintaining transparency and auditability.

    Integration Capabilities: The methodology's relationships with CDM tools and other VCS methodologies require platform capabilities for cross-methodology integration and shared calculation procedures.


    Related Resources

    • VM0033 Parsed Documentation - Complete parsed methodology document

    • VM0033 Test Case Artifact - Working test scenarios with real project data

    • ER Calculations Example - Real Allcot project calculations

    • Artifacts Collection - Complete validation tools and reference materials

    • Verra VCS Program - Methodology standards

    • VM0033 Methodology on Verra

    • Guardian Roles & Permissions - Stakeholder management

    Key Concepts Covered

    • VM0033 scope and applicability conditions

    • Baseline scenarios and project activities

    • Complex stakeholder ecosystem requirements

    • Carbon pools and emission sources

    • Monitoring and verification procedures

    Domain Knowledge Complete: You now understand VM0033's complexity and requirements. Chapter 3 will show how Guardian's architecture handles this complexity through automated workflows.


    All the content in this chapter, including technical details, calculation procedures, and referenced requirements, is derived from the actual VM0033 methodology document to ensure accuracy and completeness.


    Chapter 14: Guardian Workflow Blocks and Configuration

    Step-by-step configuration of Guardian's workflow blocks for complete methodology automation

    Chapter 13 introduced Guardian's block-event architecture. Chapter 14 gets hands-on, showing you how to configure each workflow block type using real examples from VM0033's production policy.

    Guardian provides over 25 workflow blocks, each serving specific purposes in methodology automation. Rather than memorizing every block parameter, this chapter teaches you configuration patterns that apply across different block types.

    Configuration Fundamentals

    Block Configuration Methods

    Guardian offers three ways to configure workflow blocks:

    1. Properties Tab: Visual interface for common settings

    2. Events Tab: Graphical event connection management

    3. JSON Tab: Direct JSON manipulation for advanced configurations

    Block Structure Basics

    Every Guardian workflow block follows a similar JSON structure:
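
    A minimal sketch of that shared structure is below; the tag and permissions values are illustrative placeholders, and the field set mirrors the production examples shown earlier in this guide:

    {
      "id": "auto-generated-uuid",
      "blockType": "interfaceContainerBlock",
      "tag": "example_block",
      "permissions": ["Project_Proponent"],
      "defaultActive": true,
      "onErrorAction": "no-action",
      "children": [],
      "events": []
    }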

    Key Configuration Elements:

    • id: Unique identifier (Guardian auto-generates)

    • blockType: Defines block functionality

    • tag: Human-readable name for referencing in events

    • permissions: Which roles can access this block

    Permission Patterns

    Guardian uses role-based permissions consistently across blocks:

    • ["OWNER"]: Standard Registry only

    • ["Project_Proponent"]: Project Developers only

    • ["VVB"]: Validation/Verification Bodies only

    Data Input and Management Blocks

    These blocks handle document collection, storage, and display.

    requestVcDocumentBlock: Schema-Based Forms

    Transforms your Part III schemas into interactive forms. VM0033 uses this for PDD and monitoring report submission.

    Basic Configuration:
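
    A minimal sketch, assuming a hypothetical PDD schema ID; the idType and uiMetaData values are common defaults rather than verbatim VM0033 configuration. Guardian generates the form from whatever schema you reference here:

    {
      "blockType": "requestVcDocumentBlock",
      "tag": "new_project",
      "permissions": ["Project_Proponent"],
      "defaultActive": true,
      "schema": "#your-pdd-schema-id&1.0.0",
      "idType": "UUID",
      "uiMetaData": {
        "type": "page",
        "title": "New Project Submission"
      }
    }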

    VM0033 Example - PDD Submission: The VM0033 policy uses the new_project block to collect Project Design Documents. The schemaId references the PDD schema, automatically generating form fields for:

    • Project details and location

    • Baseline emission calculations

    • Project emission parameters

    • Monitoring plan specifications

    Your Implementation: Replace VM0033's schema ID with your Part III PDD schema UUID. Guardian automatically creates form fields based on your schema structure.

    sendToGuardianBlock: Document Storage

    Stores submitted documents to database or Hedera blockchain with status tracking.

    Database Storage Configuration:
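
    A minimal sketch of database storage; the dataSource field selects the storage backend, and the status value follows the lifecycle described below (verify exact option names against your Guardian version):

    {
      "blockType": "sendToGuardianBlock",
      "tag": "save_project",
      "permissions": ["Project_Proponent"],
      "dataSource": "database",
      "documentType": "vc",
      "entityType": "project",
      "options": [
        {
          "name": "status",
          "value": "Waiting to be Added"
        }
      ]
    }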

    Hedera Blockchain Storage Configuration:
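
    The same block targeting Hedera instead of the database; the topic value here is illustrative:

    {
      "blockType": "sendToGuardianBlock",
      "tag": "save_project_hedera",
      "permissions": ["Project_Proponent"],
      "dataSource": "hedera",
      "documentType": "vc",
      "entityType": "project",
      "topic": "Project",
      "options": [
        {
          "name": "status",
          "value": "Waiting for Validation"
        }
      ]
    }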

    Storage Options Comparison:

    | Feature | Database | Hedera |
    | --- | --- | --- |
    | Usage | Drafts and intermediate workflow states | Finalized documents needing an immutable, publicly verifiable record |

    Status Management: The options array sets document status values that other blocks can filter on. For example:

    • "Waiting to be Added": New submissions awaiting registry review

    • "Waiting for Validation": Projects ready for VVB assignment

    • "Validated": Successfully validated projects

    Key Configuration Fields:

    • entityType: Groups related documents (e.g., "project", "report")

    • topic: Hedera topic for blockchain storage organization; helpful when querying through the Guardian Indexer

    • documentType: "vc" for Verifiable Credentials, "vp" for Verifiable Presentations

    interfaceDocumentsSourceBlock: Document Grids

    Displays document collections with filtering, search, and action buttons. Works with child documentsSourceAddon blocks to provide data.

    VM0033 Real Configuration - Project Pipeline:
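
    A condensed sketch of a project pipeline grid; the field paths reuse names seen elsewhere in this guide, while the bound block and group tags are hypothetical:

    {
      "blockType": "interfaceDocumentsSourceBlock",
      "tag": "project_grid_owner",
      "permissions": ["OWNER"],
      "uiMetaData": {
        "fields": [
          {
            "name": "document.credentialSubject.0.project_details.G5",
            "title": "Project Name",
            "type": "text"
          },
          {
            "name": "option.status",
            "title": "Status",
            "type": "text"
          },
          {
            "title": "Operation",
            "type": "block",
            "bindBlock": "approve_reject_buttons",
            "bindGroup": "projects_waiting"
          }
        ]
      },
      "children": []
    }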

    Key Configuration Properties:

    • uiMetaData.fields: Array defining grid columns and their properties

    • dataType: Handled by child documentsSourceAddon blocks

    • bindBlock: References another block (buttonBlock) to embed in the column

    • bindGroup: Links a grid column to a specific child documentsSourceAddon data source

    Field Type Details:

    Text Fields:

    Button Fields:

    Block Fields (for embedded buttons):
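
    A combined sketch of the three field styles above; the button entry's action, content, and dialog values are assumptions rather than verbatim VM0033 configuration:

    "fields": [
      {
        "name": "document.issuanceDate",
        "title": "Submission Date",
        "type": "text"
      },
      {
        "name": "document",
        "title": "Document",
        "type": "button",
        "action": "dialog",
        "content": "View Document",
        "uiClass": "link",
        "dialogContent": "VC",
        "dialogType": "json"
      },
      {
        "title": "Operation",
        "type": "block",
        "bindBlock": "approve_reject_buttons",
        "bindGroup": "projects_waiting"
      }
    ]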

    Required Child Blocks: interfaceDocumentsSourceBlock must have child documentsSourceAddon blocks that provide the actual data. The bindGroup property links specific columns to specific data sources.

    Logic and Calculation Blocks

    These blocks process data, validate inputs, and execute methodology calculations.

    customLogicBlock: Calculation Engine

    Executes JavaScript or Python for emission reduction calculations using schema field data.

    VM0033 Real Configuration:
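
    A trimmed sketch of the configuration shape; the expression string is reduced to a stub here, and the tag matches the automatic_report block referenced later in this chapter:

    {
      "blockType": "customLogicBlock",
      "tag": "automatic_report",
      "permissions": ["Project_Proponent"],
      "defaultActive": true,
      "onErrorAction": "no-action",
      "expression": "function calc() { /* emission reduction calculations */ }"
    }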

    Key Configuration Properties:

    • expression: JavaScript or Python code as a string

    • permissions: Which roles can trigger the calculation

    • defaultActive: Whether the block executes automatically

    • onErrorAction: How to handle calculation errors

    VM0033 JavaScript Example:
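
    A simplified sketch following the document-mapping pattern used by the tool calculations earlier in this guide; the emission field names are hypothetical stand-ins for your Part III schema fields:

    function calc() {
        const documents = arguments[0] || [];

        return documents.map((doc) => {
            const data = doc.document.credentialSubject[0];

            // Hypothetical field names - replace with your schema's field keys
            const net_reductions =
                data.baseline_emissions - data.project_emissions - data.leakage_emissions;

            // Merge the calculated result back into the document fields
            return Object.assign(data, { net_emission_reductions: net_reductions });
        });
    }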

    Your Implementation: Use your Part III schema field names as JavaScript variables. The calculation result creates new document fields accessible by other blocks.

    documentValidatorBlock: Data Validation

    Validates documents against methodology rules beyond basic schema validation.

    Configuration Pattern:
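
    A short sketch reusing the condition shapes from the validator examples earlier in this guide; the monitoring-period field path is hypothetical:

    {
      "blockType": "documentValidatorBlock",
      "tag": "validate_monitoring_report",
      "documentType": "VC Document",
      "checkSchema": true,
      "conditions": [
        {
          "type": "Not Equal",
          "field": "document.credentialSubject.0.monitoring_period_start",
          "value": ""
        },
        {
          "type": "Equal",
          "field": "option.status",
          "value": "Validated"
        }
      ]
    }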

    Validation Rules:

    • Field value comparisons (>=, <=, ==, !=)

    • Cross-field validation (one field depends on another)

    • Date range checking for monitoring periods

    switchBlock: Conditional Branching

    Creates different workflow paths based on data values or user decisions.

    Configuration Pattern:
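
    A sketch of a status-based switch, assuming a firstTrue execution flow and expression-style condition values; verify the exact field names against your Guardian version:

    {
      "blockType": "switchBlock",
      "tag": "validation_decision_switch",
      "executionFlow": "firstTrue",
      "conditions": [
        {
          "tag": "Condition_approved",
          "type": "equal",
          "value": "option.status == 'Validated'"
        },
        {
          "tag": "Condition_rejected",
          "type": "equal",
          "value": "option.status == 'Rejected'"
        }
      ]
    }

    Each condition tag acts as an output that events can route to a different target block.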

    VM0033 Usage: VVB validation decisions create different paths:

    • Approved: Project proceeds to monitoring phase

    • Rejected: Project returns to developer for revision

    • Conditional Approval: Project requires minor corrections

    Token and Asset Management Blocks

    These blocks handle carbon credit lifecycle from calculation to retirement.

    mintDocumentBlock: Token Issuance

    Issues VCU tokens based on verified emission reduction calculations.

    VM0033 Real Configuration:
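
    A condensed sketch; the rule path comes from the VM0033 integration note below, while the tokenId is a placeholder:

    {
      "blockType": "mintDocumentBlock",
      "tag": "mint_vcu_tokens",
      "permissions": ["OWNER"],
      "tokenId": "your-token-template-uuid",
      "rule": "net_GHG_emissions_reductions_and_removals.NERRWE",
      "accountType": "default"
    }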

    Key Configuration Properties:

    • rule: JSON path to calculated emission reduction value (without "document.credentialSubject.0." prefix)

    • tokenId: UUID of the token template defined in policy configuration

    • accountType: Determines which Hedera account receives the minted tokens

      • "default": Mint to the document owner's associated Hedera account

      • "custom": Mint to an account specified in the block configuration (option names may vary by Guardian version)

    Token Template Reference: The tokenId must match a token defined in the policy's policyTokens array:
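
    A sketch of a matching policyTokens entry; exact field names vary across Guardian versions, so treat this shape as illustrative only:

    "policyTokens": [
      {
        "templateTokenTag": "VCU",
        "tokenName": "Verified Carbon Unit",
        "tokenSymbol": "VCU",
        "tokenType": "fungible",
        "decimals": 2,
        "tokenId": "your-token-template-uuid"
      }
    ]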

    VM0033 Integration: VM0033 uses automatic_report customLogicBlock to calculate emission reductions, which outputs the net_GHG_emissions_reductions_and_removals.NERRWE field that the mint block references.

    tokenActionBlock: Token Operations

    Handles token transfers, retirements, and account management.

    Configuration Pattern:
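
    A minimal sketch using one of the actions listed below; the tokenId is a placeholder:

    {
      "blockType": "tokenActionBlock",
      "tag": "transfer_tokens",
      "permissions": ["OWNER"],
      "tokenId": "your-token-template-uuid",
      "action": "transfer"
    }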

    Available Actions:

    • "transfer": Move tokens between accounts

    • "freeze": Temporarily lock tokens

    • "unfreeze": Unlock frozen tokens

    retirementDocumentBlock: Permanent Token Removal

    Permanently removes tokens from circulation with retirement certificates.

    Configuration Pattern:
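
    A sketch mirroring the mint block's shape; the rule field naming here is an assumption, not verified VM0033 configuration:

    {
      "blockType": "retirementDocumentBlock",
      "tag": "retire_tokens",
      "permissions": ["OWNER"],
      "tokenId": "your-token-template-uuid",
      "rule": "retirement_amount"
    }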

    Container and Navigation Blocks

    These blocks organize user interfaces and manage workflow progression.

    interfaceContainerBlock: Layout Organization

    Creates tabs or a simple vertical layout for organizing workflow interfaces.

    Tab Container Pattern:

    policyRolesBlock: Role Assignment

    Manages user role selection and assignment within policies.

    Configuration Pattern:

    buttonBlock: Custom Actions

    Creates buttons for state transitions and custom workflow actions. Used for approve/reject decisions with optional dialogs.

    VM0033 Real Configuration - Approve/Reject Buttons:

    Button Types:

    • selector: Simple button that sets a field value

    • selector-dialog: Button with confirmation dialog for additional input

    Button Configuration Properties:

    • tag: Button identifier for event configuration (Button_0, Button_1, etc.)

    • field: Document field to modify (typically "option.status")

    • value: Value to set when button is clicked

    • uiClass: CSS class for styling (btn-approve, btn-reject, etc.)

    • filters: Array of conditions that control button visibility

    VM0033 Event Integration:

    Each button output (Button_0, Button_1) can trigger different target blocks, allowing different workflows based on which button is clicked.

    Event Configuration Patterns

    Events connect blocks together, creating automated workflows. Guardian provides both graphical and JSON-based event configuration.

    Visual Event Configuration

    The Events tab provides an intuitive interface for connecting blocks:

    Event Configuration Fields:

    • Event Type: Output Event (triggers when block completes)

    • Source: Current Block (the triggering block)

    • Output Event: RunEvent (completion trigger)

    • Target: Next Block (destination block)

    • Input Event: RunEvent (what the target block receives)

    • Event Actor: Event Initiator (who can trigger this event)

    Basic Event Structure

    Common Event Patterns

    Document Submission Flow:

    UI Refresh After Save:

    Advanced Block Configuration

    Dynamic Filtering with filtersAddon

    Creates dynamic document filters based on status, date, or custom criteria.

    VM0033 Real Configuration:

    Key Configuration Properties:

    • type: Filter UI type - "dropdown" for select options, "text" for input fields

    • queryType: Filter logic - "equal", "not_equal", "contains", etc.

    • field: Document field to filter on

    • optionName: Field path for dropdown option labels

    • optionValue: Field path for dropdown option values

    • canBeEmpty: Whether the filter allows an empty/no selection

    Document Data Source with documentsSourceAddon

    Provides filtered document collections to interfaceDocumentsSourceBlock parent containers.

    VM0033 Real Configuration:

    Key Configuration Properties:

    • dataType: Document type - "vc-documents" for Verifiable Credentials, "vp-documents" for Verifiable Presentations

    • schema: Schema UUID to filter documents by

    • filters: Array of filter conditions to apply to the document collection

    • onlyOwnDocuments: Boolean - whether to show only the user's own documents

    • defaultActive: Boolean - whether this addon is active by default

    Filter Options:

    • type: "equal", "not_equal", "contains", "not_contains", "in", "not_in"

    • field: Document field path (e.g., "option.status", "document.credentialSubject.0.field1")

    • value: Value or comma-separated values to filter by

    Block Configuration Best Practices

    Naming Conventions

    Use unique, descriptive, consistent tag names:

    • new_project for PDD submission blocks

    • save_project for document storage blocks

    • project_grid_[role] for role-specific grids

    • calculate_[type] for calculation blocks

    Permission Design

    Design permissions for least privilege:

    • Document submission: Role-specific (["Project_Proponent"])

    • Document review: Authority roles (["OWNER", "VVB"])

    • Administrative functions: Registry only (["OWNER"])

    Error Handling

    Include validation and error handling blocks:

    • Pre-validation before expensive operations

    • Clear error messages for user guidance

    • Fallback paths for edge cases

    Performance Optimization

    Optimize for user experience:

    • Use onlyOwnDocuments: true for large document sets

    • Implement pagination for document grids

    • Cache calculation results where appropriate
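    For example, a proponent-facing data source can combine onlyOwnDocuments with type filters so each user's grid loads only their own records (a sketch using the filter structure from this chapter; the tag is illustrative):

    {
      "blockType": "documentsSourceAddon",
      "tag": "my_projects_source",
      "dataType": "vc-documents",
      "onlyOwnDocuments": true,
      "filters": [
        { "field": "type", "type": "equal", "value": "project" }
      ]
    }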

    Testing Your Block Configuration

    Configuration Validation

    Test block configurations incrementally using Guardian's policy editor:

    1. Individual Block Testing: Configure each block using Properties tab, verify JSON structure

    2. Event Chain Testing: Use Events tab to connect blocks, test trigger flows

    3. Role Permission Testing: Switch user roles to verify permission restrictions

    4. Data Flow Testing: Submit test data through complete workflows using policy dry runs

    Guardian UI Testing Tips:

    • Properties Tab: Quick validation of basic settings and permissions

    • JSON Tab: Verify complex configurations and nested structures

    • Events Tab: Visual verification of workflow connections and event flows

    • Policy Preview: Test complete workflows before publishing

    Common Configuration Issues

    Schema Reference Errors:

    • Verify schema UUIDs match your Part III schemas

    • Check field path references in grids and calculations

    Permission Problems:

    • Ensure users have appropriate roles assigned

    • Check onlyOwnDocuments settings for document visibility

    Event Connection Issues:

    • Verify source and target block tags match exactly

    • Check event input/output types are compatible

    Integration with Part III Schemas

    Schema Field Mapping

    Your Part III schemas become form fields and calculation variables:

    PDD Schema → Form Fields:

    Monitoring Schema → Calculation Variables:

    Validation Rule Integration

    Schema validation rules automatically apply to requestVcDocumentBlock forms:

    • Required fields become mandatory

    • Number ranges enforce min/max values

    • Pattern validation ensures data format consistency

    • Enum values create dropdown selections
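    As an illustration, a schema fragment like the following (hypothetical field names) produces a mandatory text input, a bounded number field, and a dropdown in the generated form:

    {
      "type": "object",
      "required": ["project_title"],
      "properties": {
        "project_title": { "type": "string" },
        "baseline_emissions": { "type": "number", "minimum": 0, "maximum": 100000 },
        "monitoring_frequency": { "type": "string", "enum": ["Monthly", "Quarterly", "Annually"] }
      }
    }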

    Next Steps and Chapter 15 Preview

    Chapter 14 covered Guardian's workflow blocks and configuration patterns. You now understand how to:

    • Configure data input blocks with your Part III schemas

    • Set up calculation blocks for emission reduction formulas

    • Create token management workflows for VCU issuance

    • Design user interfaces with container and navigation blocks

    Chapter 15 Deep Dive: Now that you understand individual blocks, Chapter 15 analyzes VM0033's complete policy implementation, showing how these blocks work together in a production methodology. You'll trace the complete workflow from PDD submission to VCU token issuance, understanding real-world policy patterns.


    Prerequisites Check: Ensure you have:

    Time Investment: ~30 minutes reading + ~90 minutes hands-on practice with block configuration

    Practical Exercises:

    1. Visual Configuration Practice: Use Guardian's Properties tab to configure a requestVcDocumentBlock with your Part III PDD schema

    2. Event Connection Practice: Use the Events tab to connect form submission to document storage blocks

    3. JSON Configuration Practice: Manually configure sendToGuardianBlock for both database and Hedera storage

    4. Complete Workflow Practice: Create a simple project submission workflow using multiple block types and test with Guardian's policy preview

    Common block properties used throughout these configurations:

    • uiMetaData: Display settings and user interface configuration

    • children: Nested blocks for containers

    • events: Event triggers connecting to other blocks

    Permission values:

    • ["OWNER", "Project_Proponent"]: Multiple roles

    • ["ANY_ROLE"]: All authenticated users

    • ["NO_ROLE"]: Unauthenticated users (role selection)

    Document status value used in minting workflows:

    • "Minting": Approved for token issuance

    Database vs. Hedera storage at a glance (sendToGuardianBlock dataSource):

    | Aspect | Database | Hedera | Recommendation |
    | --- | --- | --- | --- |
    | Speed | Fast | Slower | Database for drafts, Hedera for finals |
    | Cost | Free | HBAR fees | Database for frequent updates |
    | Immutability | Mutable | Immutable | Hedera for audit trails |
    | Transparency | Private | Public | Hedera for verification |

    [Figures: Guardian Block Configuration - Properties Tab; Guardian Block Configuration - Events Tab; Guardian Block Configuration - JSON Tab; Guardian sendToGuardianBlock Configuration; Guardian Events Tab Configuration]

    The configuration examples referenced throughout this chapter follow:

    {
      "id": "#unique-uuid",
      "blockType": "requestVcDocumentBlock",
      "tag": "unique-semantic-name",
      "permissions": ["Project_Proponent"],
      "uiMetaData": {
        "title": "Submit PDD",
        "description": "Project Design Document submission"
      },
      "children": [],
      "events": []
    }
    {
      "blockType": "requestVcDocumentBlock",
      "tag": "new_project",
      "permissions": ["Project_Proponent"],
      "schemaId": "#9122bbd0-d96e-40b1-92f6-7bf60b68137c",
      "uiMetaData": {
        "title": "New Project",
        "description": "Submit Project Design Document",
        "type": "dialog"
      }
    }
    {
      "id": "0c6dabc8-43aa-424e-bd80-972302ebdc18",
      "blockType": "sendToGuardianBlock",
      "tag": "save_project_auto",
      "permissions": ["Project_Proponent"],
      "dataSource": "database",
      "documentType": "vc",
      "entityType": "project",
      "options": [
        {
          "name": "status",
          "value": "Waiting to be Added"
        }
      ]
    }
    {
      "id": "8b45d09b-03a2-4f9f-9162-6ebb2f3878a9",
      "blockType": "sendToGuardianBlock",
      "tag": "save_project_auto_hedera",
      "permissions": ["Project_Proponent"],
      "dataSource": "hedera",
      "documentType": "vc",
      "topic": "Project",
      "entityType": "project",
      "options": [
        {
          "name": "status",
          "value": "Waiting to be Added"
        }
      ]
    }
    {
      "blockType": "interfaceDocumentsSourceBlock",
      "tag": "project_grid_verra",
      "permissions": ["OWNER"],
      "uiMetaData": {
        "fields": [
          {
            "title": "Summary",
            "name": "document.credentialSubject.0.project_details.G5",
            "type": "text"
          },
          {
            "title": "Status",
            "name": "option.status",
            "type": "text",
            "width": "150px"
          },
          {
            "title": "Add",
            "name": "add",
            "type": "block",
            "bindBlock": "add_project",
            "bindGroup": "project_grid_verra_waiting_to_add_projects",
            "width": "150px"
          },
          {
            "title": "Document",
            "name": "document",
            "type": "button",
            "action": "dialog",
            "dialogContent": "VC",
            "dialogType": "json",
            "content": "View Document",
            "uiClass": "link",
            "width": "150px"
          }
        ]
      },
      "children": [
        {
          "blockType": "documentsSourceAddon",
          "tag": "project_grid_verra_waiting_to_add_projects",
          "dataType": "vc-documents",
          "schema": "#9122bbd0-d96e-40b1-92f6-7bf60b68137c",
          "filters": [
            {
              "field": "option.status",
              "type": "equal",
              "value": "Waiting to be Added"
            },
            {
              "field": "type",
              "type": "equal",
              "value": "project"
            }
          ]
        }
      ]
    }
    {
      "title": "Project Name",
      "name": "document.credentialSubject.0.field0",
      "type": "text",
      "width": "200px"
    }
    {
      "title": "Document",
      "name": "document",
      "type": "button",
      "action": "dialog",
      "dialogContent": "VC",
      "dialogType": "json",
      "content": "View Document"
    }
    {
      "title": "Operations",
      "name": "option.status",
      "type": "block",
      "bindBlock": "approve_documents_btn",
      "bindGroup": "vvb_grid_verra_documents_to_approve"
    }
    {
      "blockType": "customLogicBlock",
      "tag": "automatic_report",
      "permissions": ["Project_Proponent"],
      "expression": "const document = documents[0].document;\n// Emission reduction calculation code\ndone(adjustValues(document.credentialSubject[0]));"
    }
    // Wetland restoration emission reduction calculation
    function calculateEmissionReductions() {
        const document = documents[0].document;
        const creds = document.credentialSubject;
    
        let totalVcus = 0;
    
        for (const cred of creds) {
            for (const instance of cred.project_data_per_instance) {
    
                // Get project parameters
                const data = instance.project_instance;
                const creditingPeriod = data.individual_parameters.crediting_period;
                const bufferPercentage = data.individual_parameters['buffer_%'];
                const allowableUncert = data.individual_parameters.allowable_uncert;
    
                // Process baseline emissions (GHG_BSL)
                processBaselineEmissions(
                    data.baseline_emissions,
                    creditingPeriod,
                    data.monitoring_period_inputs,
                    data.temporal_boundary
                );
    
                // Process project emissions (GHG_WPS)
                processProjectEmissions(
                    data.project_emissions,
                    data.individual_parameters.gwp_ch4,
                    data.individual_parameters.gwp_n2o
                );
    
                // Calculate SOC maximum deduction
                SOC_MAX_calculation(
                    data.baseline_emissions,
                    data.peat_strata_input_coverage_100_years,
                    data.temporal_boundary,
                    data.ineligible_wetland_areas
                );
    
                // Net emission reductions and VCU calculation
                processNETERR(
                    data.baseline_emissions,
                    data.project_emissions,
                    data.net_ERR,
                    data.ineligible_wetland_areas.SOC_MAX,
                    bufferPercentage
                );
    
                totalVcus += data.net_ERR.total_VCU_per_instance;
            }
            cred.total_vcus = totalVcus;
        }
    
        done(adjustValues(document.credentialSubject[0]));
    }
    
    calculateEmissionReductions();
    {
      "blockType": "documentValidatorBlock",
      "tag": "validate_monitoring_report",
      "permissions": ["VVB"],
      "schemaId": "#monitoring-schema-uuid",
      "conditions": [
        {
          "field": "monitoring_period_days",
          "condition": ">=",
          "value": 365
        }
      ]
    }
    {
      "blockType": "switchBlock",
      "tag": "validation_decision",
      "permissions": ["VVB"],
      "conditions": [
        {
          "field": "validation_result",
          "value": "Approved",
          "condition": "equal"
        }
      ]
    }
    {
      "blockType": "mintDocumentBlock",
      "tag": "mintToken",
      "permissions": ["OWNER"],
      "rule": "net_GHG_emissions_reductions_and_removals.NERRWE",
      "tokenId": "66754448-ac59-4758-bc43-b075334daced",
      "accountType": "default"
    }
    {
      "policyTokens": [
        {
          "templateTokenTag": "VCU",
          "tokenName": "Verified Carbon Unit",
          "tokenSymbol": "VCU",
          "decimals": ""
        }
      ]
    }
    {
      "blockType": "tokenActionBlock",
      "tag": "transfer_tokens",
      "permissions": ["Project_Proponent"],
      "action": "transfer",
      "useTemplate": true
    }
    {
      "blockType": "retirementDocumentBlock",
      "tag": "retire_tokens",
      "permissions": ["Project_Proponent"],
      "templateTokenTag": "VCU",
      "rule": "document.credentialSubject.0.retirement_amount"
    }
    {
      "blockType": "interfaceContainerBlock",
      "tag": "main_container",
      "permissions": ["Project_Proponent"],
      "uiMetaData": {"type": "tabs"},
      "children": [
        {
          "tag": "projects_tab",
          "title": "My Projects"
        },
        {
          "tag": "reports_tab",
          "title": "Monitoring Reports"
        }
      ]
    }
    {
      "blockType": "policyRolesBlock",
      "tag": "choose_role",
      "permissions": ["NO_ROLE"],
      "roles": ["Project_Proponent", "VVB"],
      "uiMetaData": {
        "title": "Choose Your Role",
        "description": "Select your participation role in this methodology"
      }
    }
    {
      "blockType": "buttonBlock",
      "tag": "approve_documents_btn",
      "permissions": ["OWNER"],
      "uiMetaData": {
        "buttons": [
          {
            "tag": "Button_0",
            "name": "Approve",
            "type": "selector",
            "field": "option.status",
            "value": "APPROVED",
            "uiClass": "btn-approve"
          },
          {
            "tag": "Button_1",
            "name": "Reject",
            "type": "selector-dialog",
            "title": "Reject",
            "description": "Enter reject reason",
            "field": "option.status",
            "value": "REJECTED",
            "uiClass": "btn-reject"
          }
        ]
      }
    }
    {
      "events": [
        {
          "target": "update_approve_document_status",
          "source": "approve_documents_btn",
          "input": "RunEvent",
          "output": "Button_0"
        },
        {
          "target": "update_approve_document_status_2",
          "source": "approve_documents_btn",
          "input": "RunEvent",
          "output": "Button_1"
        }
      ]
    }
    {
      "events": [
        {
          "source": "source_block_tag",
          "target": "destination_block_tag",
          "input": "RunEvent",
          "output": "RefreshEvent",
          "actor": "owner"
        }
      ]
    }
    {
      "source": "new_project",
      "target": "save_new_project",
      "input": "RunEvent",
      "output": "RunEvent"
    }
    {
      "source": "save_new_project",
      "target": "project_grid",
      "input": "RefreshEvent",
      "output": "RunEvent"
    }
    {
      "blockType": "filtersAddon",
      "tag": "filter_project_grid_verra",
      "permissions": ["OWNER"],
      "uiMetaData": {
        "content": "Project Name"
      },
      "type": "dropdown",
      "queryType": "equal",
      "canBeEmpty": true,
      "field": "document.credentialSubject.0.project_details.G5",
      "optionName": "document.credentialSubject.0.project_details.G5",
      "optionValue": "document.credentialSubject.0.project_details.G5",
      "children": [
        {
          "blockType": "documentsSourceAddon",
          "dataType": "vc-documents",
          "schema": "#55df4f18-d3e5-4b93-af87-703a52c704d6",
          "filters": []
        }
      ]
    }
    {
      "blockType": "documentsSourceAddon",
      "tag": "project_grid_verra_waiting_to_add_projects",
      "permissions": ["OWNER"],
      "dataType": "vc-documents",
      "schema": "#55df4f18-d3e5-4b93-af87-703a52c704d6",
      "filters": [
        {
          "field": "option.status",
          "type": "equal",
          "value": "Waiting to be Added"
        },
        {
          "field": "type",
          "type": "equal",
          "value": "project"
        }
      ]
    }
    Schema Field: "project_title" → Form Input: Text field with validation
    Schema Field: "baseline_emissions" → Form Input: Number field with units
    Schema Field: "monitoring_frequency" → Form Input: Dropdown selection
    // In customLogicBlock
    const baseline = document.baseline_emissions_total;
    const project = document.project_emissions_measured;
    const leakage = document.leakage_emissions_calculated;

    Chapter 15: VM0033 Implementation Deep Dive

    Complete end-to-end analysis of VM0033 tidal wetland restoration policy implementation in Guardian

    Chapter 14 covered individual workflow blocks. Chapter 15 dissects VM0033's complete policy implementation, showing how blocks connect into multi-stakeholder certification workflows that automate the entire lifecycle from project submission to VCU token issuance.

    VM0033 represents Guardian's most advanced and production-ready methodology implementation, featuring complex emission calculations, multi-role workflows, and state management across the complete credit certification process.

    VM0033 Policy Editor Overview

    VM0033 Architecture Overview

    Policy Structure and Organization

    VM0033 follows Guardian's hierarchical block organization:

    Key Policy Configuration

    VM0033's policy metadata defines its scope and stakeholders:

    Role-Based Navigation Structure

    VM0033 implements role-based navigation enabling each stakeholder type to access relevant workflow sections:

    Figure 15.2: VM0033's role-based navigation configuration

    OWNER (Standard Registry) Navigation:

    • VVB Management and Approval

    • Project Pipeline Management

    • Monitoring Reports Review

    • Validation & Verification Oversight

    Project_Proponent Navigation:

    • Project Creation and Management

    • Monitoring Report Submission

    • VVB Assignment and Communication

    • Token Tracking and Management

    VVB Navigation:

    • Registration and Credential Management

    • Project Validation Assignments

    • Monitoring Report Verification

    • Validation/Verification Report Submission

    Use Case 1: VVB Approval Workflow Deep Dive

    Let's examine VM0033's VVB approval workflow as our first detailed use case. This workflow demonstrates how Guardian's interfaceDocumentsSourceBlock, documentsSourceAddon, buttonBlock, and status update mechanisms work together to create an advanced approval system.

    The VVB Approval Interface Architecture

    When a Standard Registry (OWNER) needs to approve VVB registrations, VM0033 creates an interface with three different document states and interactive approval controls.

    1. Main Interface Block Configuration

    The VVB approval interface starts with an interfaceDocumentsSourceBlock that defines the UI layout:

    How This Creates the UI:

    • Owner Column: Shows the DID of who submitted the VVB registration (document.issuer)

    • Text Column: Displays the first field from the VVB registration form (document.credentialSubject.0.field0)

    • Operation Column: Contains approval/rejection buttons (when bindGroup matches "documents_to_approve")

    • Document Column: "View Document" link that opens a dialog with full VVB registration details

    • Revoke Column: Revoke button (only for approved VVBs, when bindGroup matches "documents_approved")

    • Status Column: Plain text showing current status

    2. Document Source Configuration - The Filtering Engine

    Three separate documentsSourceAddon blocks are used to populate VVB documents with different statuses in the same interface:

    A. Documents Waiting for Approval:

    B. Approved Documents:

    C. Rejected Documents:

    How the Filtering Works:

    • Multiple Filters = AND Logic: type = "vvb" AND option.status = "Waiting for Approval"

    • defaultActive: Only "waiting for approval" shows immediately (defaultActive: true), others show when status changes

    • Schema Filtering: All use the same VVB registration schema (#41db8188-04c1-4f57-b73e-4b7d2efc797c)

    • Tag Matching: The bindGroup in interface fields matches these tag values to show appropriate buttons

    3. Button Workflow Implementation

    The approval buttons are defined in a separate buttonBlock that gets embedded in the interface:

    Button Behavior Differences:

    • Approve Button (Button_0):

      • Type: "selector" = direct action

      • Sets option.status = "APPROVED" immediately

      • Triggers event to update_approve_document_status block

      • Green styling (btn-approve)

    • Reject Button (Button_1):

      • Type: "selector-dialog" = shows dialog first

      • Opens modal with title "Reject" and prompt "Enter reject reason"

      • User input gets captured before setting option.status = "REJECTED"

      • Triggers event to update_approve_document_status_2 block

      • Red styling (btn-reject)

    4. Status Update Processing

    When buttons are clicked, Guardian routes events to status update blocks:

    What Happens During Status Update:

    1. Button Click: User clicks "Approve" or "Reject"

    2. Event Trigger: Button emits Button_0 or Button_1 events

    3. Event Routing: Guardian routes to the corresponding update_approve_document_status block

    4. Document Update: sendToGuardianBlock updates the document in the database with the new status

    5. Filter Refresh: Document moves between filtered views automatically

    6. UI Update: Interface refreshes to show updated document lists

    Complete VVB Approval Flow Summary

    Initial State:

    • VVB submits registration → Document created with type: "vvb", option.status: "Waiting for Approval"

    • Document appears in "documents to approve" filter with Approve/Reject buttons

    Approval Flow:

    • OWNER clicks "Approve" → option.status changes to "APPROVED" → Document type changes to "approved_vvb"

    • Document disappears from "waiting for approval" and appears in "approved documents" with Revoke button

    Rejection Flow:

    • OWNER clicks "Reject" → Dialog opens for reason → option.status changes to "REJECTED" → Document type changes to "rejected_vvb"

    • Document disappears from "waiting for approval" and appears in "rejected documents" section

    This was one simple example of how Guardian's block system can create powerful, multi-state workflows with automatic UI updates and proper audit trails.


    Use Case 2: Project Submission and Calculation Workflow Deep Dive

    Let's examine how Project_Proponents submit PDDs and how VM0033 processes them through form generation, data storage, and calculation integration. This workflow showcases Guardian's ability to transform schemas into working forms and process complex scientific data.

    The Project Submission Architecture

    When Project_Proponents create new projects, VM0033 transforms your Part III PDD schema into a working form, processes the submission through automated calculations, and stores the results for validation workflows.

    1. Project Submission Form Block

    The project submission starts with a requestVcDocumentBlock that generates forms from schema:

    How This Creates the Project Form:

    • Schema Integration: Guardian reads the PDD schema (#55df4f18-d3e5-4b93-af87-703a52c704d6) from Part III and automatically generates form fields

    • Dialog Type: Opens as modal dialog (type: "dialog") with title "New project"

    • UUID Generation: Creates unique project identifier (idType: "UUID")

    • Empty Presets: No pre-populated fields (presetFields: []); users fill in all data manually. Preset fields can be configured as needed.

    • Permission Control: Only Project_Proponents can access this form

    2. Dual Storage Strategy Implementation

    A two-path storage strategy is used after form submission:

    A. Database Storage (Working Documents):

    B. Hedera Storage (on-chain):

    Storage Strategy Differences:

    | Aspect | Database Storage | Hedera Storage |
    | --- | --- | --- |
    | Purpose | Working documents, calculations, internal processing | Final validated documents, immutable records |
    | Cost | Free | HBAR transaction costs |
    | Speed | Instant | 3-5 seconds for consensus |
    | stopPropagation | true (prevents further processing) | false (continues workflow) |
    | Use Case | Draft submissions, calculation processing | Validated projects, audit trails |

    3. Event-Driven Calculation Processing

    After database storage, VM0033 triggers tool calculations:

    Calculation Engine Integration:

    The policy includes customLogicBlock elements that process the submitted PDD data and output final emission reduction calculations:

    Calculation Processing Overview:

    • Temporal Boundary Calculations: Peat depletion times and soil organic carbon depletion periods

    • Baseline Emissions: CO2, CH4, and N2O emissions for each monitoring year and stratum

    • Project Emissions: Project scenario emissions using same methodology approach

    • Carbon Stock Analysis: Total stock approach vs stock loss approach determination

    • Net Emission Reductions: Final emission reductions/removals calculations

    Note: The complete calculation implementation is covered in detail in Part V (Calculation Logic).

    Complete Project Submission Flow Summary

    Step 1: Form Generation

    • Project_Proponent clicks "New Project" → Guardian generates form from PDD schema

    • Form includes all project details, baseline parameters, monitoring specifications from Part III

    Step 2: Data Submission

    • User completes form → requestVcDocumentBlock creates Verifiable Credential

    • Document contains all submitted data plus generated UUID identifier

    Step 3: Storage Processing

    • sendToGuardianBlock saves document to database for processing

    • stopPropagation: true prevents immediate Hedera storage (cost optimization)

    Step 4: Calculation Processing

    • Event routing triggers calculation engine (customLogicBlock)

    • JavaScript processes all emission calculations using VM0033 methodology formulas

    • Calculated results added to original document structure

    Step 5: Document Storage

    • Enhanced document (original + calculations) stored in database

    • Document ready for validation assignment and approval workflows

    • Later moved to Hedera after validation approval

    Key Technical Insights:

    1. Schema-to-Form Integration: Guardian automatically creates complex forms from JSON Schema definitions, eliminating manual UI development

    2. Event-Driven Processing: Form submission triggers calculation workflows through Guardian's event system, enabling sophisticated processing chains

    3. Cost-Optimized Storage: Working documents in database, final documents on blockchain optimizes cost while maintaining audit integrity

    4. Data Enhancement: Original submissions are enhanced with calculated results, maintaining both user input and processed outputs in a single document

    5. Workflow Preparation: Processed documents are ready for multi-stakeholder validation workflows with complete calculation results

    This demonstrates how VM0033 transforms simple form submissions into scientifically processed project documents ready for carbon credit certification workflows.


    OWNER (Standard Registry) Role Workflow

    The OWNER represents the Standard Registry (Verra) and manages the overall certification program. VM0033 implements their workflow through a tabbed interface that organizes different operational areas.

    Verra Header Structure

    The OWNER interface uses VM0033's Verra_header container that creates a tabbed navigation system:

    OWNER Navigation Tabs:

    • Approve VVB: VVB registration management (detailed in Use Case 1)

    • Projects Pipeline: Project listing and status management

    • Monitoring Reports: Report review and approval workflows

    • Validation & Verification: Oversight of VVB activities

    1. VVB Management (approve_VVB)

    VVB Approval Workflow: Detailed in Use Case 1, this section manages the complete VVB lifecycle from registration through approval and ongoing management.

    2. Project Pipeline Management

    Project Status Oversight: OWNER reviews all project submissions, approvals, and workflow progression across all Project_Proponents.

    3. Monitoring Reports Review

    Report Validation: OWNER has oversight access to all monitoring reports and can review calculation accuracy and methodology compliance.

    4. Validation & Verification Oversight

    VVB Performance Monitoring: OWNER tracks VVB validation and verification activities, ensuring quality and compliance across all assignments.

    5. Token Management and Trust Chains

    VCU Issuance Control: OWNER controls final token minting decisions and maintains complete audit trails for all issued carbon credits.


    Project_Proponent Role Workflow

    The Project_Proponent drives the main certification workflow from project creation through monitoring report submission. VM0033 policy provides them with a dedicated header container and navigation structure.

    Project_Proponent Header Structure

    The Project_Proponent interface uses VM0033's Project_Proponent_header container:

    Project_Proponent Navigation Structure:

    • Projects: Project creation and management (Projects_pp)

    • Create Project: Project submission workflow

    • Monitoring Reports: Report submission (Monitoring_Reports_pp)

    • Validation & Verifications: Status tracking (Validation_and_Verification_PP)

    • Tokens: VCU receipt and management

    1. Projects Section (project_grid_pp_2)

    Document Display: Shows all projects owned by the current Project_Proponent

    • Filtering: Uses onlyOwnDocuments: true to show only user's projects

    • Status Tracking: Displays project progression through certification stages

    • Action Buttons: "New Project" for submissions, status-specific actions

    2. New Project Submission (new_project)

    Form Generation: The requestVcDocumentBlock creates forms from PDD schema (covered in Use Case 2)

    • Schema Reference: #55df4f18-d3e5-4b93-af87-703a52c704d6 (Part III PDD schema)

    • Processing Chain: Form → Database Storage → Calculation Engine → Enhanced Document

    • Status Assignment: New projects get "Waiting to be Added" status

    3. VVB Assignment (assign_vvb)

    Assignment Interface: After admin approval, projects become available for VVB assignment

    Assignment Process:

    • Lists approved VVBs available for selection

    • Assignment triggers reassigningBlock to transfer document ownership

    • Project status changes to "Assigned for Validation"

    • VVB gains access to project in their workflow
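    A skeleton of this step, under stated assumptions: the reassigning block's tag is hypothetical, only the universal properties from Chapter 14 are shown, and the event wires the existing assign_vvb_btn to it:

    {
      "blockType": "reassigningBlock",
      "tag": "reassign_to_vvb",
      "permissions": ["Project_Proponent"]
    }
    {
      "events": [
        {
          "source": "assign_vvb_btn",
          "target": "reassign_to_vvb",
          "input": "RunEvent",
          "output": "RunEvent"
        }
      ]
    }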

    4. Monitoring Reports Section (new_report)

    Report Submission: Uses same schema-to-form pattern as project submission

    • Schema Reference: Monitoring report schema from Part III

    • Calculation Integration: Triggers emission reduction calculations

    • VVB Workflow: Submitted reports appear in VVB's verification queue

    5. Validation & Verification Tracking

    Status Monitoring: Project_Proponent tracks validation and verification progress

    • Validation Status: Shows when VVB completes validation process

    • Verification Status: Displays monitoring report verification results

    • Communication: Receives feedback and requests for additional information

    6. Token Management

    VCU Receipt: Final step where Project_Proponent receives issued carbon credits

    • Token Display: Shows minted VCUs with quantity and metadata

    • Transfer Capability: Can transfer or retire tokens as needed

    • Audit Trail: Complete history from project submission to token receipt


    VVB Role Workflow

    VVBs provide independent validation and verification services. VM0033 policy structures their workflow through a dedicated header container with role-specific navigation.

    VVB Header Structure

    The VVB interface uses VM0033's VVB_Header container:

    VVB Navigation Structure:

    • VVB Documents: Registration and credential management

    • Projects: Project validation assignments (Projects_vvb)

    • Monitoring Reports: Report verification (Monitoring_Reports_vvb)

    • Validation & Verifications: Report submission (Validation_and_Verification_vvb)

    1. VVB Registration (new_VVB)

    Registration Process: VVBs must register and receive approval before accessing assignments

    • Form Submission: create_new_vvb block generates registration form

    • Dual Storage: Initial database storage, then Hedera after approval

    • Approval Workflow: Goes through OWNER approval process (Use Case 1)

    2. VVB Documents Dashboard

    Document Management: Central hub for all VVB-related documents

    • Status Filtering: Documents filtered by approval status

    • Action Items: Shows pending approvals and active assignments

    • Historical Records: Access to completed validation/verification work

    3. Project Assignments (Projects_vvb)

    Assignment Interface: Shows projects assigned to the VVB for validation

    Validation Process:

    • Review project documentation and calculations

    • Conduct site visits and stakeholder interviews

    • Submit validation report with approve/reject decision

    • Update project status based on validation outcome

    4. Monitoring Report Verification (Monitoring_Reports_vvb)

    Verification Queue: Lists monitoring reports requiring verification

    • Document Filter: Shows reports assigned to current VVB

    • Calculation Review: Verify emission reduction calculations

    • Field Verification: Confirm monitoring data accuracy

    • Verification Decision: Approve or request corrections

    5. Validation & Verification Reports (Validation_and_Verification_vvb)

    Report Submission: VVBs submit detailed validation and verification reports

    • Validation Reports: Document project eligibility and methodology compliance

    • Verification Reports: Confirm monitoring data accuracy and calculations

    • Status Updates: Reports trigger project status changes upon submission

    6. Minting Events Participation

    Token Issuance: VVBs participate in final token minting decisions

    • Final Review: Last verification before token issuance

    • Minting Approval: Confirm readiness for VCU generation

    • Audit Trail: Complete validation/verification history attached to tokens


    End-to-End Workflow Integration

    VM0033's real power emerges from connecting individual role workflows into seamless automation. Here's how documents flow through the complete certification process:

    Phase 1: Project Onboarding

    1. VVB Registration → OWNER approval → VVB activation

    2. Project Submission → Calculation processing → Admin review → Project listing

    Phase 2: Validation Assignment

    1. Project_Proponent assignment → VVB assignment → Ownership transfer

    2. VVB validation → Site visits → Validation report → Project approval

    Phase 3: Monitoring and Verification

    1. Monitoring report submission → Calculation updates → VVB verification

    2. Verification completion → Status updates → Token minting preparation

    Phase 4: Token Issuance

    1. OWNER final review → Token minting → VCU distribution → Audit trail completion

    Key Integration Patterns

    Document State Management:

    • Status-driven filtering ensures users see relevant documents

    • Automatic UI updates when document states change

    • Complete audit trails from submission to token issuance

    Role-Based Access Control:

    • Each role sees only relevant workflow sections

    • Permission-based document access and modification rights

    • Secure information isolation between stakeholders

    Event-Driven Processing:

    • Form submissions trigger calculation engines

    • Status changes propagate across all stakeholder interfaces

    • Automated notifications keep participants informed

    Cost-Optimized Storage:

    • Database storage for working documents and calculations

    • Hedera storage for validated, final documents

    • Strategic blockchain usage minimizes transaction costs

    Calculation Integration:

    • Schema submissions automatically trigger methodology calculations

    • Enhanced documents contain both original data and calculated results

    • Consistent calculation logic across project and monitoring phases

    This end-to-end integration creates a seamless experience where stakeholders focus on their expertise while Guardian handles workflow coordination, document routing, and audit trail generation automatically.


    Key Implementation Takeaways

    1. Role-Based Interface Design

    VM0033 succeeds through clear separation of stakeholder interfaces. Each role sees only relevant documents and actions, reducing complexity while maintaining complete audit trails.

    2. Document Lifecycle Management

    Status-driven filtering automatically routes documents to appropriate stakeholders at each certification stage, eliminating manual coordination overhead.

    3. Schema-Driven Development

    Form generation from JSON schemas enables rapid methodology adaptation while ensuring data consistency across all workflow stages.

    4. Event-Driven Architecture

    Guardian's event system coordinates between roles without tight coupling, enabling flexible workflow modifications and easy extension for additional stakeholder types.

    5. Cost-Optimized Blockchain Integration

    Strategic use of database storage for working documents and Hedera storage for final records optimizes costs while maintaining audit integrity.

    Practical Implementation Guidance

    For Your Methodology Implementation:

    1. Start with VM0033 as Foundation: Import VM0033.policy, replace schemas with your Part III designs, then modify role workflows and calculation logic.

    2. Map Stakeholder Workflows First: Define your specific stakeholder roles and their document review processes before implementing detailed block configurations.

    3. Design Status Progression: Plan document status values and transitions to drive automatic workflow routing between stakeholder roles.

    4. Implement Role Sections: Create navigation sections for each stakeholder role, ensuring users see only relevant documents and actions.

    5. Test Complete Workflows: Validate end-to-end document flows from initial submission through final token issuance with realistic test data.


    Advanced Implementation Patterns

    Navigation Structure Implementation

    VM0033's navigation structure from the policy configuration drives the role-based interface organization:

    Navigation Level System:

    • Level 1: Primary navigation tabs (main sections)

    • Level 2: Sub-sections within primary tabs

    • Block Mapping: Each navigation item maps to specific workflow blocks

    Container Block Hierarchy

    VM0033's container organization creates the role-based workflow structure:

    Role-Based Project Interface Implementation

    Each role sees different views of the same project data through permission-based filtering:

    Project_Proponent Project View:

    VVB Project View:

    Key Interface Differences:

    • Project_Proponent: Shows assign button, status text, focuses on project management

    • VVB: Shows operation buttons for approval/rejection actions

    • OWNER: Shows all projects with administrative oversight capabilities

    Document Filtering and Status Management

    VM0033 uses advanced filtering to show role-appropriate documents:

    Project_Proponent Filter (Own Documents Only):

    VVB Filter (Assigned Documents):

    OWNER Filter (All Documents):

    Status Progression Management:

    VM0033 manages document status through workflow stages:

    1. Project_Proponent Submission: "Waiting to be Added"

    2. OWNER Approval: "Approved for Assignment"

    3. VVB Assignment: "Assigned for Validation"

    4. VVB Validation: "Validated"

    5. Monitoring Submission: "Under Verification"

    6. VVB Verification: "Verified"

    7. Token Minting: "Credited"

    Each status change triggers automatic document filtering updates across all user interfaces.
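    For instance, two sibling documentsSourceAddon blocks keyed to successive status values make a project "move" between grids automatically when an update block rewrites option.status (tags are illustrative; the statuses match the progression above):

    {
      "blockType": "documentsSourceAddon",
      "tag": "projects_awaiting_validation",
      "dataType": "vc-documents",
      "filters": [
        { "field": "option.status", "type": "equal", "value": "Assigned for Validation" }
      ]
    }
    {
      "blockType": "documentsSourceAddon",
      "tag": "projects_validated",
      "dataType": "vc-documents",
      "filters": [
        { "field": "option.status", "type": "equal", "value": "Validated" }
      ]
    }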

    Token Management Implementation

    VM0033's token management connects calculation results to VCU issuance:

    Token Minting Process:

    1. Calculation Completion: customLogicBlock calculates final emission reductions

    2. Verification Approval: VVB confirms calculation accuracy

    3. OWNER Review: Final administrative approval

    4. Token Minting: VCUs issued based on calculated emission reductions

    5. Transfer to Project_Proponent: Tokens transferred to the project developer

    6. Trust Chain Generation: Complete audit trail created and stored on Hedera


    Summary: VM0033 Policy Implementation

    Chapter 15 demonstrated how VM0033 transforms Guardian's block system into production-ready multi-stakeholder workflows. Through detailed analysis of VVB approval workflows, project submission processes, and role-based interfaces, we examined how JSON configurations create working certification systems.

    Key Technical Achievements:

    1. Role-Based Architecture: Each stakeholder (OWNER, Project_Proponent, VVB) receives tailored interfaces with appropriate permissions and document filtering

    2. Event-Driven Coordination: Button clicks trigger status updates that automatically refresh filtered document views across all user interfaces

    3. Schema-Driven Form Generation: Part III schemas automatically generate working forms with calculation integration

    4. Cost-Optimized Storage: Strategic use of database vs Hedera storage minimizes blockchain costs while maintaining audit integrity

    5. Document Lifecycle Management: Status-driven filtering routes documents through certification stages without manual coordination

    VM0033 policy demonstrates Guardian's ability to implement complex environmental methodologies as automated workflows. The policy serves as both a working carbon credit system and a template for implementing other methodologies using similar patterns.

    Implementation Readiness: VM0033's patterns directly apply to your methodology implementation. The role structures, document filtering, and workflow coordination patterns adapt to different stakeholder arrangements and certification requirements.


    Next Steps: Chapter 16 explores advanced policy patterns including multi-methodology support, external data integration, and production optimization techniques using VM0033's proven implementation as a foundation.

    Prerequisites Check: Ensure you have:

    Time Investment: ~45 minutes reading + ~120 minutes hands-on VM0033 analysis and workflow tracing

    Practical Exercises:

    1. VM0033 Workflow Tracing: Follow a complete project lifecycle through VM0033's policy editor

    2. Calculation Analysis: Examine VM0033's emission calculation engine and map to your methodology

    3. Role Simulation: Test VM0033 workflows from each stakeholder perspective (OWNER, Project_Proponent, VVB)

    4. Event Flow Mapping: Trace key event connections that drive VM0033's automated workflows


    [Figures: Navigation structure configuration; Actual render in dry run; VVB approval flow under Verra header; VVB Approval Grid Interface; Reject dialog showing reason input field; New project submission flow; Project Submission Dialog; OWNER Verra Interface; Project_Proponent Projects Grid; Projects List UI; VVB Interface]

    The policy structure and configuration examples referenced throughout this chapter follow:

    Root Container (interfaceContainerBlock)
    ├── Role Selection (policyRolesBlock)
    ├── OWNER Workflow (Standard Registry)
    │   ├── VVB Management
    │   ├── Project Pipeline
    │   ├── Monitoring Reports
    │   ├── Validation & Verification
    │   └── Token Management
    ├── Project_Proponent Workflow
    │   ├── Project Management
    │   ├── Monitoring Reports
    │   ├── VVB Selection
    │   └── Token Tracking
    └── VVB Workflow
        ├── Registration
        ├── Project Validation
        ├── Report Verification
        └── Document Management
    {
      "name": "VM0033-v1.0.3_8_14",
      "description": "This methodology outlines procedures for estimating net greenhouse gas (GHG) emission reductions and removals from tidal wetland restoration projects...",
      "policyRoles": ["Project_Proponent", "VVB"],
      "policyTokens": [{
        "templateTokenTag": "VCU",
        "tokenName": "Verified Carbon Unit",
        "tokenSymbol": "VCU"
      }],
      "codeVersion": "1.5.1"
    }
    {
      "id": "14d69df0-bb61-4c65-abaf-4507cde54521",
      "blockType": "interfaceDocumentsSourceBlock",
      "defaultActive": true,
      "permissions": ["OWNER"],
      "tag": "vvb_grid_verra",
      "uiMetaData": {
        "fields": [
          {
            "title": "Owner",
            "name": "document.issuer",
            "type": "text"
          },
          {
            "title": "Text",
            "name": "document.credentialSubject.0.field0",
            "type": "text"
          },
          {
            "title": "Operation",
            "name": "option.status",
            "type": "block",
            "bindBlock": "approve_documents_btn",
            "width": "250px",
            "bindGroup": "vvb_grid_verra_documents_to_approve"
          },
          {
            "title": "Document",
            "name": "document",
            "type": "button",
            "action": "dialog",
            "dialogContent": "VC",
            "dialogType": "json",
            "content": "View Document",
            "uiClass": "link"
          },
          {
            "title": "Revoke",
            "name": "",
            "type": "block",
            "bindBlock": "revoke_vvb_verra_btn",
            "bindGroup": "vvb_grid_verra_documents_approved",
            "width": "100px"
          },
          {
            "title": "Operation",
            "name": "option.status",
            "type": "text",
            "width": "250px"
          }
        ]
      }
    }
    {
      "id": "e206551f-d96a-4b4f-b2a5-3f12182cbd67",
      "blockType": "documentsSourceAddon",
      "defaultActive": true,
      "permissions": ["OWNER"],
      "filters": [
        {
          "value": "vvb",
          "field": "type",
          "type": "equal"
        },
        {
          "value": "Waiting for Approval",
          "field": "option.status",
          "type": "equal"
        }
      ],
      "dataType": "vc-documents",
      "schema": "#41db8188-04c1-4f57-b73e-4b7d2efc797c",
      "tag": "vvb_grid_verra_documents_to_approve"
    }
    {
      "id": "18d1f380-77d3-49bb-aaa0-09a9dbe29d9c",
      "blockType": "documentsSourceAddon",
      "defaultActive": false,
      "permissions": ["OWNER"],
      "filters": [
        {
          "value": "approved_vvb",
          "field": "type",
          "type": "equal"
        }
      ],
      "dataType": "vc-documents",
      "schema": "#41db8188-04c1-4f57-b73e-4b7d2efc797c",
      "tag": "vvb_grid_verra_documents_approved"
    }
    {
      "id": "eb1ee4f5-9b3c-4350-b5a8-516bbea728c8",
      "blockType": "documentsSourceAddon",
      "defaultActive": false,
      "permissions": ["OWNER"],
      "filters": [
        {
          "value": "rejected_vvb",
          "field": "type",
          "type": "equal"
        }
      ],
      "dataType": "vc-documents",
      "schema": "#41db8188-04c1-4f57-b73e-4b7d2efc797c",
      "tag": "vvb_grid_verra_documents_approved_rejected"
    }
    {
      "id": "95890b13-cc6f-4d03-afde-323c6337498d",
      "blockType": "buttonBlock",
      "defaultActive": false,
      "permissions": ["OWNER"],
      "tag": "approve_documents_btn",
      "uiMetaData": {
        "buttons": [
          {
            "tag": "Button_0",
            "name": "Approve",
            "type": "selector",
            "field": "option.status",
            "value": "APPROVED",
            "uiClass": "btn-approve"
          },
          {
            "tag": "Button_1",
            "name": "Reject",
            "type": "selector-dialog",
            "title": "Reject",
            "description": "Enter reject reason",
            "field": "option.status",
            "value": "REJECTED",
            "uiClass": "btn-reject"
          }
        ]
      },
      "events": [
        {
          "target": "update_approve_document_status",
          "source": "approve_documents_btn",
          "input": "RunEvent",
          "output": "Button_0",
          "disabled": false
        },
        {
          "target": "update_approve_document_status_2",
          "source": "approve_documents_btn",
          "input": "RunEvent",
          "output": "Button_1",
          "disabled": false
        }
      ]
    }
    {
      "id": "a95abd22-d952-4471-aacc-af159704aefe",
      "blockType": "sendToGuardianBlock",
      "defaultActive": false,
      "permissions": ["VVB"],
      "entityType": "vvb",
      "dataSource": "database",
      "documentType": "vc",
      "tag": "update_approve_document_status"
    }
    {
      "id": "aaa78a11-c00b-4669-9022-bd2971504d70",
      "blockType": "requestVcDocumentBlock",
      "defaultActive": true,
      "permissions": ["Project_Proponent"],
      "uiMetaData": {
        "privateFields": [],
        "type": "dialog",
        "content": "New project",
        "dialogContent": "New project",
        "description": "New project"
      },
      "idType": "UUID",
      "schema": "#55df4f18-d3e5-4b93-af87-703a52c704d6",
      "presetFields": [],
      "tag": "add_project_bnt"
    }
    {
      "id": "574168a5-ae6b-4736-8570-2fad76413915",
      "blockType": "sendToGuardianBlock",
      "defaultActive": false,
      "permissions": ["Project_Proponent"],
      "entityType": "project_form",
      "dataSource": "database",
      "documentType": "vc",
      "stopPropagation": true,
      "tag": "save_project"
    }
    {
      "id": "2b8e4132-1c5e-49fc-b6aa-26b40b7c23b5",
      "blockType": "sendToGuardianBlock",
      "defaultActive": false,
      "permissions": ["Project_Proponent"],
      "dataSource": "hedera",
      "documentType": "vc",
      "topic": "Project",
      "entityType": "project_form",
      "tag": "save_project_hedera"
    }
    {
      "events": [
        {
          "target": "AR_tool_14_project",
          "source": "save_project",
          "input": "input_ar_tool_14",
          "output": "RunEvent"
        }
      ]
    }
    {
      "id": "4b02c4d7-faec-4519-a1b1-ab1b9353ea9b",
      "blockType": "customLogicBlock",
      "defaultActive": false,
      "permissions": ["Project_Proponent"],
      "expression": "[1000+ lines of VM0033 methodology calculations - detailed in Part V]"
    }
    {
      "id": "62628fc5-d96d-4948-9666-5db5c6399f47",
      "blockType": "interfaceContainerBlock",
      "defaultActive": true,
      "uiMetaData": {
        "type": "tabs"
      },
      "permissions": ["OWNER"],
      "tag": "Verra_header",
      "children": [
        {
          "id": "5de9b842-b4e2-498c-b9a4-4321cf39b824",
          "blockType": "interfaceContainerBlock",
          "uiMetaData": {
            "type": "blank",
            "title": "Approve VVB"
          },
          "tag": "approve_VVB"
        }
      ]
    }
    {
      "id": "0509ff35-6011-4945-849a-c39d690f0c8a",
      "blockType": "interfaceContainerBlock",
      "defaultActive": true,
      "uiMetaData": {
        "type": "tabs"
      },
      "permissions": ["Project_Proponent"],
      "tag": "Project_Proponent_header"
    }
    {
      "blockType": "interfaceDocumentsSourceBlock",
      "tag": "vvb_grid_pp",
      "permissions": ["Project_Proponent"],
      "uiMetaData": {
        "fields": [
          {
            "title": "VVB Name",
            "name": "document.credentialSubject.0.field0",
            "type": "text"
          },
          {
            "title": "Assign",
            "type": "block",
            "bindBlock": "assign_vvb_btn"
          }
        ]
      }
    }
    {
      "id": "0c43a214-3fcf-492f-826a-32d2712f6f7f",
      "blockType": "interfaceContainerBlock",
      "defaultActive": true,
      "uiMetaData": {
        "type": "tabs"
      },
      "permissions": ["VVB"],
      "tag": "VVB_Header"
    }
    {
      "blockType": "interfaceDocumentsSourceBlock",
      "tag": "project_grid_vvb",
      "permissions": ["VVB"],
      "filters": [
        {
          "field": "assignedTo",
          "type": "equal",
          "value": "[current VVB DID]"
        },
        {
          "field": "option.status",
          "type": "equal",
          "value": "Assigned for Validation"
        }
      ]
    }
    {
      "policyNavigation": [
        {
          "role": "Project_Proponent",
          "steps": [
            {
              "name": "Projects",
              "block": "Projects_pp",
              "level": 1
            },
            {
              "name": "Create Project",
              "block": "Projects_pp",
              "level": 2
            },
            {
              "name": "Monitoring Reports",
              "block": "Monitoring_Reports_pp",
              "level": 1
            },
            {
              "name": "Validation & Verifications",
              "block": "Validation_and_Verification_PP",
              "level": 1
            }
          ]
        },
        {
          "role": "VVB",
          "steps": [
            {
              "name": "VVB Documents",
              "block": "VVB Documents",
              "level": 1
            },
            {
              "name": "Projects",
              "block": "Projects_vvb",
              "level": 1
            },
            {
              "name": "Monitoring Reports",
              "block": "Monitoring_Reports_vvb",
              "level": 1
            }
          ]
        }
      ]
    }
    Root Container (policyRolesBlock)
    ├── OWNER → Verra_header (tabs)
    │   ├── Approve VVB
    │   ├── Projects Pipeline
    │   ├── Monitoring Reports
    │   ├── Validation & Verification
    │   └── Trust Chain & Tokens
    ├── Project_Proponent → Project_Proponent_header (tabs)
    │   ├── Projects
    │   ├── Monitoring Reports
    │   ├── Validation & Verification
    │   └── Tokens
    └── VVB → VVB_Header (tabs)
        ├── VVB Documents
        ├── Projects
        ├── Monitoring Reports
        └── Validation & Verification
    {
      "id": "021d7c4b-945d-4618-b521-34f24c31fde3",
      "blockType": "interfaceDocumentsSourceBlock",
      "permissions": ["Project_Proponent"],
      "tag": "project_grid_pp_2",
      "uiMetaData": {
        "fields": [
          {
            "title": "Project",
            "name": "document.credentialSubject.0.project_details.G5",
            "type": "text"
          },
          {
            "title": "Status",
            "name": "option.status",
            "type": "text",
            "width": "170px"
          },
          {
            "title": "Assign",
            "name": "assignedTo",
            "type": "block",
            "bindBlock": "assign_vvb_btn",
            "bindGroup": "project_grid_pp_approved_projects"
          }
        ]
      }
    }
    {
      "id": "31d0e29f-b950-4b43-bb0b-58aee9035c0c",
      "blockType": "interfaceDocumentsSourceBlock",
      "permissions": ["VVB"],
      "tag": "project_grid_vvb",
      "uiMetaData": {
        "fields": [
          {
            "title": "Project",
            "name": "document.credentialSubject.0.project_details.G5",
            "type": "text"
          },
          {
            "title": "Operation",
            "name": "option.status",
            "type": "block",
            "bindBlock": "approve_project_btn",
            "width": "250px",
            "bindGroup": "project_grid_vvb_projects"
          }
        ]
      }
    }
    {
      "blockType": "documentsSourceAddon",
      "onlyOwnDocuments": true,
      "filters": [
        {
          "field": "type",
          "type": "equal",
          "value": "project"
        },
        {
          "field": "option.status",
          "type": "equal",
          "value": "Waiting to be Added"
        }
      ]
    }
    {
      "blockType": "documentsSourceAddon",
      "filters": [
        {
          "field": "assignedTo",
          "type": "equal",
          "value": "[current_user_did]"
        },
        {
          "field": "option.status",
          "type": "equal",
          "value": "Assigned for Validation"
        }
      ]
    }
    {
      "blockType": "documentsSourceAddon",
      "filters": [
        {
          "field": "type",
          "type": "equal",
          "value": "project"
        }
      ]
    }
    {
      "blockType": "mintDocumentBlock",
      "tag": "mintToken",
      "permissions": ["OWNER"],
      "rule": "net_GHG_emissions_reductions_and_removals.NERRWE",
      "tokenId": "66754448-ac59-4758-bc43-b075334daced",
      "accountType": "default"
    }

    Chapter 18: Custom Logic Block Development

    Converting methodology equations into executable code using Guardian's customLogicBlock

This chapter teaches you how to implement methodology calculations as working code that produces accurate emission reductions or removals. You'll learn to translate VM0033's mathematical formulas into executable functions, using the ABC Mangrove project's real-world data artifact as your validation benchmark. By the end, you'll write code that transforms methodology equations into verified carbon credit calculations.

    Learning Objectives

    After completing this chapter, you will be able to:

    • Translate methodology equations into executable JavaScript or Python code

    • Implement formulas for baseline emissions, project emissions, and net emission reductions

    • Process monitoring data through mathematical models defined in VM0033 methodology

    • Validate equation implementations against Allcot test artifact input/output data

    • Handle data precision and validation requirements for accurate calculations

    • Structure mathematical calculations for production-ready environmental credit systems

    Prerequisites

    • Completed Part IV: Policy Workflow Design and Implementation

    • Understanding of VM0033 methodology and equations from Part I

    • Basic programming knowledge for implementing mathematical formulas (JavaScript or Python)

    • Access to validation artifacts: equation implementations, test input data, and the Allcot validation spreadsheet

    Guardian customLogicBlock: Your Calculation Engine

    The Mathematical Execution Environment

    Guardian's customLogicBlock is your calculation engine for environmental methodologies - it's where mathematical equations become executable code. Think of it as a computational engine that processes monitoring data through formulas to produce emission reductions that match methodology equations precisely.

    You can write your calculations in JavaScript or Python - Guardian supports both languages. Most of our examples use JavaScript, but the concepts apply equally to Python.

    Understanding Your Input Data

    Every customLogicBlock receives Guardian documents through arguments[0]. These contain the measured variables and parameters needed for your methodology equations - real data from environmental monitoring. Here's the data structure you'll process through mathematical formulas:

    This is actual data from the ABC Blue Carbon Mangrove Project in Senegal - the same project used in our test case spreadsheet.

    Accessing Data Like a Pro

    Field Access Patterns from Production Code

    Let's look at how VM0033's production code accesses data. These utility functions from er-calculations.js make your code clean and readable:

    The ?? operator provides safe defaults when data might be missing.
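
    For instance, a two-line illustration of the pattern (field names here are placeholders, not the production schema):

    // Returns 0 instead of undefined when the parameter is missing
    const GWP_CH4 = data?.individual_parameters?.gwp_ch4 ?? 0;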

    Building Your Calculation Engine

    The Main Calculation Function

    Every customLogicBlock starts with a main function that processes the documents. Here's the pattern from VM0033's production code:

    Processing Project Instances

    Each project instance represents a restoration site. The processInstance function is where you implement the methodology calculations:

    Implementing Baseline Emission Equations

    From Methodology Equations to Code

    Baseline emissions implement the scientific equations from VM0033 Section 8.1 - representing the "business as usual" scenario without restoration. Each equation in the methodology PDF more or less becomes a function in your code.

    Example: VM0033 Equation 8.1.1 - Soil CO2 Emissions

    Implementing Project Emission Equations

    Translating VM0033 Section 8.2 Equations

    Project emissions implement equations from VM0033 Section 8.2 - the restoration scenario calculations. These equations typically show reduced emissions and increased sequestration compared to baseline.

    Example: VM0033 Equation 8.2.3 - Project Biomass Change

    Implementing Net Emission Reduction Equations

    VM0033 Section 8.5 - The Final Scientific Calculation

    This implements VM0033's core equation that transforms baseline and project emissions into verified carbon units (VCUs). Each line of code corresponds to specific equations in Section 8.5 of the methodology.

    Example: VM0033 Equation 8.5.1 - Net Emission Reductions

    Handling Real-World Data Challenges

    Defensive Programming Patterns

    Real project data is messy. Projects miss monitoring periods, equipment fails, and data gets corrupted. Values may also arrive as a different data type than you expect. Your code needs to handle this gracefully:

    Error Handling

    Validation: Allcot Test Artifact as Your Benchmark

    Ensuring Mathematical Accuracy

    The Allcot test artifact is your validation benchmark - it contains input parameters and expected output results calculated manually according to VM0033 methodology equations. Your code must reproduce these results exactly to ensure mathematical accuracy.

    Your equation implementations must produce the same results as the manual calculations to be valid.

    Python Alternative

    Writing CustomLogicBlocks in Python

    Guardian also supports Python for customLogicBlock development. The concepts are the same, just different syntax:

    Choose the language you're more comfortable with - both produce identical results.

    Testing Your Code

    Quick Testing Tips

    While Chapter 21 covers comprehensive testing, here are quick validation techniques while you're developing:

    1. Console Logging for Debug

    2. Guardian's Built-in Testing

    Use Guardian's customLogicBlock testing interface (covered in Chapter 21) to test with real data.

    3. Unit Testing Individual Functions

    Real Results: ABC Mangrove Project

    Production Calculation Results

    Using VM0033's calculation engine with the ABC Blue Carbon Mangrove Project data, here are the actual VCU projections over the 40-year crediting period (data shown through 2055 only):

    | Year | VCU Credits | Year | VCU Credits | Year | VCU Credits | Year | VCU Credits |
    |------|-------------|------|-------------|------|-------------|------|-------------|
    | 2022 | 0.01 | 2032 | 104,012.50 | 2042 | 122,680.75 | 2052 | 75,559.80 |
    | 2023 | 0.29 | 2033 | 110,576.46 | 2043 | 120,929.68 | 2053 | 72,200.65 |
    | 2024 | 4.31 | 2034 | 115,770.40 | 2044 | 118,625.12 | 2054 | 69,072.40 |
    | 2025 | 1,307.66 | 2035 | 119,502.79 | 2045 | 115,610.59 | 2055 | 66,174.64 |

    Total Project Impact: 2,861,923 VCU credits over 40 years

    This demonstrates what your code should produce - substantial carbon credits from mangrove restoration that follow the methodology calculations exactly.

    Deep Dive: VM0033 Production Implementation Analysis

    Note for Readers: This section provides a detailed analysis of the VM0033 calculation implementation in Guardian's customLogicBlock. It's intended for developers who need to understand, write, or maintain VM0033 calculation code. You can skip this section if you only need to understand the basic customLogicBlock concepts.

    This deep dive examines the complete production implementation of VM0033 tidal wetland restoration calculations in Guardian, using er-calculations.js and the customLogicBlock in VM0033's PDD submission flow as our reference implementations.

    Complete VM0033 Production Code Architecture

    The 1261-line er-calculations.js contains 25+ interconnected functions implementing the full VM0033 methodology. Here's the complete function catalog mapped to test artifact worksheets:

    Core Architecture Overview

    Test Artifact Mapping

    Each function maps directly to specific data models defined within VM0033_Allcot_Test_Case_Artifact.xlsx:

    • ProjectBoundary (27x13) → getProjectBoundaryValue(), processInstance() boundary logic

    • QuantificationApproach (8x22) → getQuantificationValue(), SOC approach selection

    • StratumLevelInput + UI Req (49x29) → All stratum processing functions

    • MonitoringPeriodInputs (158x8) → processMonitoringSubmergence(), monitoring functions

    • 5.1_TemporalBoundary (36x24) → calculatePDTSDT(), temporal boundary functions

    • 8.1BaselineEmissions (158x84) → processBaselineEmissions() complete logic

    • 8.2ProjectEmissions (158x83) → processProjectEmissions() complete logic

    • 8.5NetERR (53x23) → processNETERR() and all VCU calculation functions

    Section 3: Temporal Boundary System (Lines 181-350)

    Peat and Soil Depletion Time Calculations

    VM0033 calculates when carbon pools will be depleted to determine crediting periods. This maps directly to the 5.1_TemporalBoundary worksheet (36x24) in our test artifact.

    calculatePDTSDT() - Temporal Boundary Implementation (Lines 181-286)

    This function implements VM0033 Section 5.1 equations for calculating Peat Depletion Time (PDT) and Soil organic carbon Depletion Time (SDT):

    Temporal Boundary Helper Functions (Lines 288-350)

    These functions provide stratum-specific temporal boundary access:
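
    As a minimal sketch of the access pattern (assuming the temporalBoundary array shape produced by calculatePDTSDT(), shown later in this chapter; the production signature may differ):

    // Sketch: end of the peat depletion window for one stratum
    function getEndPDTPerStratum(temporalBoundary, stratum_i) {
        const entry = (temporalBoundary ?? []).find(s => s.stratum_i === stratum_i);
        return entry?.peat_depletion_time?.end_PDT ?? 0;
    }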

    100-Year Carbon Projection Functions (Lines 312-350)

    These functions calculate carbon coverage over 100-year projections:

    Test Artifact Cross-Reference:

    • The temporal boundary calculations map to 5.1_TemporalBoundary worksheet rows 5-36

    • PDT calculations use peat depth and loss rates from StratumLevelInput columns M-P

    • SDT calculations use soil characteristics from StratumLevelInput columns Q-T

    • 100-year projections cross-reference 5.2.4_Ineligible wetland areas

    This temporal boundary system determines:

    1. When carbon pools will be depleted in the baseline scenario

    2. How long emission reductions can be credited for each stratum

    3. Which calculation approach to use (total stock vs stock loss)

    4. The temporal scope for SOC_MAX calculations

    Section 4: SOC Calculation Approaches (Lines 352-516)

    Two Ways to Calculate Soil Organic Carbon Benefits

    VM0033 offers two approaches for calculating soil organic carbon benefits. Both map to the 5.2.4_Ineligible wetland areas worksheet (47x30) in our test artifact.

    totalStockApproach() - Compare 100-Year Carbon Stocks (Lines 352-458)

    This approach compares total carbon stocks at 100 years between baseline and project scenarios:

    stockLossApproach() - Compare Carbon Loss Rates (Lines 461-506)

    This approach compares carbon loss rates over 100 years:

    SOC_MAX_calculation() - Approach Selector (Lines 508-514)

    This function selects which approach to use and calculates the final SOC_MAX value:
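
    Conceptually the selector reduces to a branch like this (a simplified sketch - the production function takes many more parameters, and the approach flag name is illustrative):

    // Sketch: pick the SOC quantification approach and return SOC_MAX
    function SOC_MAX_sketch(data, params) {
        return params.use_total_stock_approach
            ? totalStockApproach(data, params)  // compare 100-year carbon stocks
            : stockLossApproach(data, params);  // compare 100-year loss rates
    }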

    Test Artifact Cross-Reference:

    • SOC calculations map to 5.2.4_Ineligible wetland areas worksheet columns A-AD

    • Total stock approach uses 100-year projections from columns B-H

    • Stock loss approach uses carbon loss rates from columns I-O

    • Both approaches feed into SOC_MAX value in column AD

    Section 1: Monitoring and Submergence Processing (Lines 39-94)

    Processing Time-Series Monitoring Data

    VM0033 tracks wetland submergence over time to calculate biomass changes. This maps to the MonitoringPeriodInputs worksheet (158x8) in our test artifact.

    processMonitoringSubmergence() - Submergence Monitoring Engine (Lines 39-69)

    This function processes submergence measurements across monitoring years and calculates biomass deltas:

    getDeltaCBSLAGBiomassForStratumAndYear() - Biomass Delta Lookup (Lines 71-91)

    This function retrieves biomass delta values for specific stratum and year combinations:
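
    In outline, the lookup walks the yearly records to find the stored delta (a sketch; the production version also guards against missing years):

    // Sketch: biomass delta for one stratum in one year
    function getDeltaCBSLAGBiomass(yearlyData, stratum_i, year_t) {
        const yearRec = (yearlyData ?? []).find(y => y.year_t === year_t);
        const stratum = yearRec?.annual_stratum_parameters
            ?.find(s => s.stratum_i === stratum_i);
        return stratum?.annual_stratum_level_parameters
            ?.delta_C_BSL_agbiomass_i_t ?? 0;
    }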

    Test Artifact Cross-Reference:

    • Submergence data maps to MonitoringPeriodInputs worksheet columns A-H

    • is_submerged values from column B

    • submergence_T periods from column C

    • area_submerged_percentage from column D

    • Calculated delta_C_BSL_agbiomass_i_t values stored in column H

    Section 2: Specialized Calculator Functions (Lines 95-180)

    Allocation Deductions and VCU Change Calculations

    These functions handle allocation deductions and VCU change calculations between monitoring periods.

    Allocation Deduction Functions (Lines 95-137)

    VM0033 requires allocation deductions for certain soil types and approaches:

    GHG Emission Getter Functions (Lines 140-169)

    These functions safely retrieve emission values by year:
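
    Each getter follows the same pattern - locate the year record, then return its total with a safe default. A sketch (the aggregate field name is assumed for illustration):

    // Sketch: total baseline GHG emissions for a given year, 0 if absent
    function getGHGBSL(yearlyData, year_t) {
        const yr = (yearlyData ?? []).find(y => y.year_t === year_t);
        return yr?.GHG_BSL_total ?? 0; // field name illustrative
    }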

    VCU Change Calculation Functions (Lines 170-179)

    These functions calculate VCU changes between monitoring periods:
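
    Their usage in processNETERR (shown later in this chapter) suggests the following shape - treat the bodies as sketches of the incremental-buffer idea rather than the production formulas:

    // Sketch: buffer deduction on the incremental stock change between periods
    function calculateNetERRChange(adjNER, prevAdjNER, stock, prevStock, bufferPct) {
        return (stock - prevStock) * bufferPct;
    }

    // Sketch: period VCU = incremental adjusted NER minus the buffer deduction
    function calculateNetVCU(adjNER, prevAdjNER, bufferDeduction) {
        return (adjNER - prevAdjNER) - bufferDeduction;
    }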

    Test Artifact Cross-Reference:

    • Allocation deductions map to 8.1BaselineEmissions and 8.2ProjectEmissions allocation columns

    • VCU change calculations feed into 8.5NetERR worksheet VCU change columns M-P

    • Fire reduction premiums cross-reference FireReductionPremium + UI Req worksheet

    Section 8: Complete processInstance Orchestration (Lines 1126-1241)

    The Master Controller: How All 25+ Functions Work Together

    The processInstance() function is where the entire VM0033 methodology comes together. It orchestrates all the functions we've covered and maps to multiple test artifact worksheets. This is the production-level implementation that processes a complete project instance.

    Parameter Extraction Phase (Lines 1126-1184)

    The function starts by extracting parameters from every section of the Guardian document:

    Monitoring Data Processing Phase (Lines 1185-1221)

    Next, the function processes monitoring period inputs:

    Calculation Orchestration Phase (Lines 1221-1241)

    Finally, the function orchestrates all the calculations in the correct order:

    Test Artifact Cross-Reference:

    • ProjectBoundary worksheet → Project boundary parameter extraction (lines 1132-1159)

    • QuantificationApproach worksheet → Quantification approach parameters (lines 1162-1170)

    • IndividualParameters worksheet → Individual parameter extraction (lines 1173-1184)

    • MonitoringPeriodInputs worksheet → Monitoring data processing (lines 1186-1200)

    • IF Wood Product Is Included worksheet → Wood product data (lines 1211-1218)

    • All calculation worksheets → Orchestrated function calls (lines 1221-1240)

    This orchestration demonstrates production-level implementation where:

    1. Parameter extraction is conditional - only extract what you need

    2. Calculation order matters - temporal boundaries before emissions, emissions before VCUs

    3. Every major worksheet in the test artifact maps to specific code sections

    4. Defensive programming - safe defaults and conditional logic throughout

    Section 9: Entry Point and Final Integration (Lines 1243-1261)

    The calc() Function - Guardian's Entry Point

    The calc() function is Guardian's entry point for customLogicBlock execution. It processes multiple project instances and calculates total VCUs:

    Section 5: Complete processBaselineEmissions Implementation (Lines 517-713)

    The 200-Line Baseline Calculation Engine

    This is the production implementation that processes VM0033 baseline emissions, mapping directly to the 8.1BaselineEmissions worksheet (158x84) in our test artifact.

    Key Production Features:

    • AR Tool Integration - Integration with AR Tool 14 (afforestation) and AR Tool 05 (fuel)

    • Temporal Boundary Application - PDT/SDT constraints applied to actual emission calculations

    • Submergence Integration - Monitoring data affects biomass calculations

    • Multiple Calculation Methods - Field data, proxies, IPCC factors handled

    • Defensive Programming - Safe defaults and null checks throughout

    • Year-level Aggregation - Proper summing across strata and time

    Each function processes multi-dimensional calculations across temporal and spatial boundaries.

    Baseline Emissions Processing

    Let's examine the baseline emissions calculation in detail, cross-referencing with test artifact data:

    1. Temporal Boundary Calculations - PDT/SDT Implementation

    From er-calculations.js:181-286, the calculatePDTSDT function establishes critical temporal boundaries required by VM0033 methodology:

    This corresponds directly to the 5.1_TemporalBoundary worksheet in our test artifact (36x24 dimensions), which contains:

    • Peat thickness measurements (cm) - Column C in test data

    • Subsidence rates (cm/year) - Column D in test data

    • Calculated PDT values for each stratum - Column E in test data

    • SOC depletion time calculations - Column F in test data

    2. Fire Emissions Processing with Multi-Pool Carbon Dynamics

    The fire emissions processing demonstrates temporal modeling across multiple carbon pools:

    This implementation maps precisely to the 8.1BaselineEmissions worksheet rows handling fire emission calculations, which include:

    • Fire area data (hectares) - Columns K-M in test data

    • Above-ground biomass (tC/ha) - Columns N-P in test data

    • Below-ground biomass (tC/ha) - Columns Q-S in test data

    • Combustion factors - Columns T-V in test data

    • Root combustion factors - Columns W-Y in test data

    3. Soil Carbon Stock Approaches Implementation

    The implementation handles two distinct soil carbon quantification approaches as specified in VM0033 Section 5.2:

    Project Emissions Processing

    Project emissions represent the "with project" scenario and involve restoration activity modeling:

    1. Project Emissions Calculation

    From er-calculations.js:712-785, project emissions account for various restoration phases:

    This corresponds to the 8.2ProjectEmissions worksheet (158x83 dimensions) containing:

    • Machinery fuel consumption data - Columns F-H in test data

    • Transportation emission factors - Columns I-K in test data

    • Restoration activity schedules - Columns L-N in test data

    • Equipment operation parameters - Columns O-Q in test data

    2. Soil GHG Emissions Under Restored Conditions

    The project scenario accounts for altered soil GHG emissions under restored wetland conditions:

    Net Emission Reductions (NER) with Uncertainty Handling

    The final stage calculates creditable emission reductions with uncertainty and buffer deductions:

    1. Multi-Component NER Calculation Engine

    From er-calculations.js:786-849, the net emission reduction calculation:

    2. Uncertainty and Buffer Deduction Framework

    The implementation applies uncertainty and buffer deductions as required by VM0033:

    This maps directly to the 8.5NetERR worksheet (53x23 dimensions) which contains:

    • Annual emission reduction calculations - Columns C-E in test data

    • Uncertainty percentage applications - Columns F-H in test data

    • Buffer percentage deductions - Columns I-K in test data

    • NERRWE cap applications - Columns L-N in test data

    • Final creditable volumes - Columns O-Q in test data

    Section 6: Complete processProjectEmissions Implementation (Lines 715-926)

    The processProjectEmissions function calculates the project scenario emissions. It follows a parallel structure to baseline processing but applies project-specific parameters.

    AR Tool Results Integration

    The function begins by extracting AR Tool results for each stratum:

    This corresponds to the 6.2ARTool14ProjectData worksheet (2x4 dimensions) where AR Tool 14 calculates carbon stock changes in:

    • Tree biomass - Column C in test data

    • Shrub biomass - Column D in test data

    And 6.4ARTool5ProjectData worksheet (43x4 dimensions) where AR Tool 05 calculates fossil fuel consumption for project machinery and operations.

    Biomass Application Logic

    The function includes conditional logic for biomass components:

    This checks stratum configuration flags to determine which biomass pools should be included in calculations. The corresponding test data in 7.2ProjectScenarioData worksheet (43x28 dimensions) shows these boolean flags in columns H-J.

    Project Scenario Soil Emissions

    The soil emissions calculation follows the same three-method approach as baseline but applies project scenario parameters:

    This maps to 7.2ProjectScenarioData columns K-M which contain project scenario soil carbon change data calculated using the same methods as baseline but with project-specific parameters.

    Non-CO2 Gas Calculations

    The function handles CH4 and N2O emissions from soil using project-specific approaches:

    This corresponds to columns N-P in 7.2ProjectScenarioData where CH4 emissions are calculated using project-specific approaches and emission factors.

    Prescribed Burning Calculations

    The function includes specialized calculations for prescribed burning activities:

    This calculates emissions from biomass burning using emission factors for N2O and CH4, converted to CO2 equivalent using Global Warming Potentials. The calculations use the Math.pow(10, -6) conversion factor for unit consistency. Test data in columns S-U of 7.2ProjectScenarioData validate these burning emission calculations.
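
    As a rough sketch of that unit conversion (variable names are illustrative; emission factors are expressed in g per kg of dry matter burnt, hence the 10^-6 factor to reach tonnes of CO2e):

    // Sketch: CH4 from prescribed burning, converted to tCO2e
    // A_burn (ha) * B (kg d.m./ha) * COMF (fraction combusted)
    // * EF_CH4 (g CH4 per kg d.m.) * GWP_CH4, scaled by 10^-6 (g -> t)
    const GHG_burn_CH4 = A_burn * B * COMF * EF_CH4 * GWP_CH4 * Math.pow(10, -6);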

    Annual Aggregation

    The function aggregates all emission components for each monitoring year:

    This produces the annual project scenario emissions that feed into net emission reduction calculations. The final aggregation creates cumulative totals across all monitoring years using the reduce operations.

    The function outputs correspond to 7.3ProjectScenarioGHGEmissions worksheet (43x7 dimensions) which contains:

    • Annual biomass emission changes - Column C

    • Annual soil emissions - Column D

    • Annual fuel consumption emissions - Column E

    • Annual burning emissions - Column F

    • Total annual project emissions - Column G

    Section 7: Complete processNETERR Implementation (Lines 927-1118)

    The processNETERR function calculates the net emission reductions for each monitoring year. This function brings together baseline and project scenario results to determine final creditable volumes.

    Baseline and Project Aggregation

    The function begins by aggregating baseline and project scenario results across all strata for each monitoring year:

    This aggregation corresponds to 8.1NetERRCoreData worksheet (43x8 dimensions) where baseline and project scenario emissions are aggregated across all strata to produce project-level totals for each monitoring year.

    Cumulative Calculations

    The function maintains cumulative sums across monitoring years using running totals:

    This produces cumulative emission totals that are essential for stock loss approach calculations and buffer pool management. Test data in columns C-F of 8.1NetERRCoreData shows these cumulative progressions.
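
    The pattern is a simple running total (a sketch with illustrative field names):

    // Sketch: accumulate cumulative totals across sorted monitoring years
    let cumBSL = 0, cumWPS = 0;
    for (const rec of netErrArr) {          // assumed sorted by year_t
        cumBSL += rec.GHG_BSL_total ?? 0;   // field names illustrative
        cumWPS += rec.GHG_WPS_total ?? 0;
        rec.cumulative_GHG_BSL = cumBSL;
        rec.cumulative_GHG_WPS = cumWPS;
    }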

    Stock Loss Deduction Logic

    The function implements stock loss approach deductions when enabled:

    This logic deducts any emissions above the maximum soil organic carbon limit (SOC_MAX) to ensure conservative crediting. The calculation corresponds to column G in 8.1NetERRCoreData which shows stock loss deductions applied when cumulative differences exceed the methodology limits.
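
    In essence (a sketch; SOC_MAX here is the cumulative tCO2e limit computed earlier):

    // Sketch: deduct only the portion of cumulative benefit above SOC_MAX
    const cumulativeDiff = rec.cumulative_GHG_BSL - rec.cumulative_GHG_WPS;
    rec.GHG_WPS_soil_deduction = isStockLossApproach
        ? Math.max(0, cumulativeDiff - SOC_MAX)
        : 0;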

    Fire Reduction Premium Integration

    The function includes optional fire reduction premium credits:

    This applies fire reduction credits based on documented fire management activities. Test data in column H of 8.1NetERRCoreData shows annual fire reduction premium applications.

    NERRWE Calculation

    The core net emission reduction calculation combines all components:

    This formula represents the fundamental VM0033 equation: Net Emission Reductions = Baseline Emissions + Project Emissions + Fire Reduction Premium - Leakage - Stock Loss Deductions.

    Capping Logic

    The function applies optional annual emission reduction caps:

    This ensures annual emission reductions don't exceed methodology-defined limits. Test data in 8.2NetERRAdjustments worksheet (43x6 dimensions) shows the application of caps in column C.
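
    Conceptually (a sketch; the cap flag and limit names are illustrative):

    // Sketch: cap annual NERRWE at a methodology-defined limit when enabled
    rec.NERRWE_capped = isCapEnabled
        ? Math.min(rec.NERRWE, annual_ERR_cap)
        : rec.NERRWE;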

    Uncertainty Adjustments

    The function applies measurement and model uncertainties:

    This incorporates both positive (allowable) and negative (model error) uncertainty adjustments. The calculation corresponds to column D in 8.2NetERRAdjustments where uncertainty percentages are applied to final emission reductions.

    Buffer Pool Calculations

    The function calculates buffer pool deductions using an incremental approach:

    This calculates buffer deductions based on incremental changes between monitoring years rather than applying the buffer percentage to total accumulations. Test data in 8.3NetERRBufferDeduction worksheet (43x6 dimensions) validates these buffer calculations.

    Final VCU Calculations

    The function produces final Verified Carbon Units:

    This produces the final creditable carbon units for each monitoring year. The outputs correspond to 8.4NetERRFinalCalculations worksheet (43x6 dimensions) which contains:

    • Gross emission reductions - Column C

    • Uncertainty-adjusted reductions - Column D

    • Buffer deductions - Column E

    • Final VCU issuance - Column F

    The function establishes total VCU quantities that determine final carbon credit issuance amounts for the project.

    Chapter Summary

    You've learned how to translate scientific equations from environmental methodologies into executable code that produces verified carbon credits. The key principles:

    • Equation-to-Code Translation - Every methodology equation becomes a function in your customLogicBlock

    • Scientific Precision Required - Use defensive programming to handle edge cases while maintaining mathematical accuracy

    • Allcot Test Artifact is Your Benchmark - Your code must reproduce manual calculations exactly for scientific validity

    • Field Access Utilities - enable clean implementation of complex mathematical formulas

    • Both JavaScript and Python supported - choose the language that best implements your equations

    Your equation implementations are the foundation of environmental credit integrity. When coded properly, they transform scientific methodology equations into verified carbon units that represent real, measured emission reductions from restoration projects.

    The next chapter explores Formula Linked Definitions (FLDs) for managing parameter relationships, and Chapter 21 covers comprehensive testing to ensure your calculations are production-ready.



    // Guardian customLogicBlock structure - this is your equation implementation workspace
    {
      "blockType": "customLogicBlock",
      "tag": "methodology_equation_implementation",
      "expression": "(function calc() {\n  // Implement methodology equations here\n  const documents = arguments[0] || [];\n  // Process monitoring data through scientific formulas\n  return calculatedResults;\n})"
    }
    // Real document structure from final-PDD-vc.json
    const document = {
      document: {
        credentialSubject: [
          {
            // Real project information
            project_cert_type: "CCB v3.0 & VCS v4.4",
            project_details: {
              registry_vcs: {
                vcs_project_description: "ABC Blue Carbon Mangrove Project..."
              }
            },
    
            // The data your calculations need
            project_data_per_instance: [{
              project_instance: {
                // Baseline emissions data
                baseline_emissions: { /* monitoring data */ },
                // Project emissions data
                project_emissions: { /* monitoring data */ },
                // Where your calculations go
                net_ERR: {
                  total_VCU_per_instance: 0  // You'll calculate this!
                }
              }
            }],
    
            // Project settings and parameters
            project_boundary: { /* boundary conditions */ },
            individual_parameters: { /* methodology parameters */ }
          }
        ]
      }
    };
    // These utility functions handle the complexity for you
    function getProjectBoundaryValue(data, key) {
        return data.project_boundary_baseline_scenario?.[key]?.included ??
            data.project_boundary_project_scenario?.[key]?.included ??
            undefined;
    }
    
    function getIndividualParam(data, key) {
        return data?.individual_parameters?.[key] ?? undefined;
    }
    
    function getMonitoringValue(data, key) {
        return data?.monitoring_period_inputs?.[key] ?? undefined;
    }
    
    // Using these in your calculations
    function processInstance(instance, project_boundary) {
        const data = instance.project_instance;
    
        // Get project settings cleanly
        const BaselineSoil = getProjectBoundaryValue(project_boundary, 'baseline_soil');
    
        // Get methodology parameters
        const GWP_CH4 = getIndividualParam(data, 'gwp_ch4');
    
        // Get monitoring data
        const SubmergenceData = getMonitoringValue(data, 'submergence_monitoring_data');
    }
    // Main entry point - this is where your calculations begin
    function calc() {
        // Guardian passes documents as arguments[0]
        const documents = arguments[0] || [];
        const document = documents[0].document;
        const creds = document.credentialSubject;
    
        let totalVcus = 0;
    
        // Process each project instance (some projects have multiple sites)
        for (const cred of creds) {
            for (const instance of cred.project_data_per_instance) {
                // This is where the real work happens
                processInstance(instance, cred.project_boundary);
    
                // Add up the verified carbon units
                totalVcus += instance.project_instance.net_ERR.total_VCU_per_instance;
            }
    
            // Set the total for this credential
            cred.total_vcus = totalVcus;
        }
    
        // Guardian expects this callback
        done(adjustValues(document.credentialSubject[0]));
    }
    function processInstance(instance, project_boundary) {
        const data = instance.project_instance;
    
        // Extract key parameters you'll need
        const crediting_period = getIndividualParam(data, 'crediting_period') || 40;
        const GWP_CH4 = getIndividualParam(data, 'gwp_ch4') || 28;
        const GWP_N2O = getIndividualParam(data, 'gwp_n2o') || 265;
    
        // Get project boundary settings
        const baseline_soil_CH4 = getProjectBoundaryValue(project_boundary, 'baseline_soil_ch4');
        const project_soil_CH4 = getProjectBoundaryValue(project_boundary, 'project_soil_ch4');
    
        // Process the main calculations
        processBaselineEmissions(data.baseline_emissions, /* parameters */);
        processProjectEmissions(data.project_emissions, /* parameters */);
        processNETERR(data.baseline_emissions, data.project_emissions, data.net_ERR, /* parameters */);
    }
    Methodology Equation: GHGBSL,soil,CO₂,i,t = -(44/12) × ΔCBSL,soil,i,t × Ai,t
    Code Implementation: asl.GHGBSL_soil_CO2_i_t = -(3.6666666666666665 * asl.delta_C_BSL_soil_i_t)
    function processBaselineEmissions(baseline, crediting_period, baseline_soil_CH4,
        soil_CH4_approach, GWP_CH4, baseline_soil_N2O, soil_N2O_approach, GWP_N2O) {
    
        // Process each monitoring year
        for (const yearRec of baseline.yearly_data_for_baseline_GHG_emissions ?? []) {
            const { year_t } = yearRec;
    
            // Process each stratum (different habitat types) within the year
            for (const stratum of yearRec.annual_stratum_parameters ?? []) {
                const { stratum_i } = stratum;
                const sc = stratum.stratum_characteristics ?? {};
                const asl = stratum.annual_stratum_level_parameters ?? {};
    
                // Here's where AR Tool calculations integrate
                asl.delta_CTREE_BSL_i_t_ar_tool_14 = stratum.ar_tool_14?.delta_C_TREE ?? 0;
                asl.delta_CSHRUB_BSL_i_t_ar_tool_14 = stratum.ar_tool_14?.delta_C_SHRUB ?? 0;
    
                // Calculate biomass changes (trees and shrubs)
                const const_12_by_44 = 0.2727272727272727; // Carbon conversion factor
                asl.delta_C_BSL_tree_or_shrub_i_t = const_12_by_44 *
                    (asl.delta_CTREE_BSL_i_t_ar_tool_14 + asl.delta_CSHRUB_BSL_i_t_ar_tool_14);
    
                // Calculate soil CO2 emissions based on methodology approach
                if (asl.is_soil) {
                    const method = sc.co2_emissions_from_soil;
    
                    switch (method) {
                        case "Field-collected data":
                            // Direct measurements from field
                            asl.GHGBSL_soil_CO2_i_t = -(3.6666666666666665 * asl.delta_C_BSL_soil_i_t);
                            break;
                        case "Proxies":
                            // Using proxy data when direct measurement isn't available
                            asl.GHGBSL_soil_CO2_i_t = asl.GHG_emission_proxy_GHGBSL_soil_CO2_i_t;
                            break;
                        default:
                            // Sum of individual emission sources
                            asl.GHGBSL_soil_CO2_i_t =
                                (asl.GHGBSL_insitu_CO2_i_t ?? 0) +
                                (asl.GHGBSL_eroded_CO2_i_t ?? 0) +
                                (asl.GHGBSL_excav_CO2_i_t ?? 0);
                    }
                } else {
                    asl.GHGBSL_soil_CO2_i_t = 0;
                }
    
                // Calculate CH4 emissions if included in project boundary
                if (baseline_soil_CH4) {
                    switch (soil_CH4_approach) {
                        case "IPCC emission factors":
                            asl.GHGBSL_soil_CH4_i_t = asl.IPCC_emission_factor_ch4_BSL * GWP_CH4;
                            break;
                        case "Proxies":
                            asl.GHGBSL_soil_CH4_i_t = asl.GHG_emission_proxy_ch4_BSL * GWP_CH4;
                            break;
                        default:
                            asl.GHGBSL_soil_CH4_i_t = asl.CH4_BSL_soil_i_t * GWP_CH4;
                    }
                } else {
                    asl.GHGBSL_soil_CH4_i_t = 0;
                }
    
                // Total baseline emissions per stratum
                asl.GHGBSL_soil_i_t = asl.A_i_t * (
                    asl.GHGBSL_soil_CO2_i_t -
                    asl.Deduction_alloch +
                    asl.GHGBSL_soil_CH4_i_t +
                    asl.GHGBSL_soil_N2O_i_t
                );
            }
    
            // Aggregate across all strata for this year
            const sum_delta_C_BSL_biomass = yearRec.annual_stratum_parameters
                .reduce((acc, s) => acc + (Number(s.annual_stratum_level_parameters
                    .delta_C_BSL_biomass_i_t) || 0), 0);
    
            yearRec.GHG_BSL_biomass = -(sum_delta_C_BSL_biomass * 3.6666666666666665);
        }
    }
    Methodology Equation: ΔCWPS,biomass,i,t = ΔCWPS,tree or shrub,i,t + ΔCWPS,herb,i,t
    Code Implementation: asl.delta_C_WPS_biomass_i_t = asl.delta_C_WPS_tree_or_shrub_i_t + asl.delta_C_WPS_herb_i_t
    function processProjectEmissions(project, project_soil_CH4, project_soil_CH4_approach,
        GWP_CH4, project_soil_N2O, soil_N2O_approach, GWP_N2O) {
    
        for (const yearRec of project.yearly_data_for_project_GHG_emissions ?? []) {
            for (const stratum of yearRec.annual_stratum_parameters ?? []) {
                const asl = stratum.annual_stratum_level_parameters ?? {};
                const sc = stratum.stratum_characteristics ?? {};
    
                // AR Tool calculations for project scenario
                asl.delta_C_TREE_PROJ_i_t_ar_tool_14 = stratum.ar_tool_14?.delta_C_TREE ?? 0;
                asl.delta_C_SHRUB_PROJ_i_t_ar_tool_14 = stratum.ar_tool_14?.delta_C_SHRUB ?? 0;
    
                // Project biomass calculations (usually positive - sequestration!)
                asl.delta_C_WPS_tree_or_shrub_i_t = 0.2727272727272727 *
                    (asl.delta_C_TREE_PROJ_i_t_ar_tool_14 + asl.delta_C_SHRUB_PROJ_i_t_ar_tool_14);
    
                asl.delta_C_WPS_biomass_i_t =
                    asl.delta_C_WPS_tree_or_shrub_i_t + asl.delta_C_WPS_herb_i_t;
    
                // Project soil emissions (usually much lower than baseline)
                if (asl.is_soil) {
                    const method = sc.co2_emissions_from_soil;
    
                    switch (method) {
                        case "Field-collected data":
                            asl.GHGWPS_soil_CO2_i_t = -(3.6666666666666665 * asl.delta_C_WPS_soil_i_t);
                            break;
                        case "Proxies":
                            asl.GHGWPS_soil_CO2_i_t = asl.GHG_emission_proxy_GHGWPS_soil_CO2_i_t;
                            break;
                        default:
                            asl.GHGWPS_soil_CO2_i_t =
                                (asl.GHGWPS_insitu_CO2_i_t ?? 0) +
                                (asl.GHGWPS_eroded_CO2_i_t ?? 0) +
                                (asl.GHGWPS_excav_CO2_i_t ?? 0);
                    }
                }
    
                // Total project soil emissions per stratum
                asl.GHGWPS_soil_i_t = asl.A_i_t * (
                    asl.GHGWPS_soil_CO2_i_t -
                    asl.Deduction_alloch_WPS +
                    asl.GHGWPS_soil_CH4_i_t +
                    asl.GHGWPS_soil_N2O_i_t
                );
            }
    
            // Year-level project emissions aggregation
            const sum_delta_C_WPS_biomass = yearRec.annual_stratum_parameters.reduce(
                (acc, s) => acc + (Number(s.annual_stratum_level_parameters.delta_C_WPS_biomass_i_t) || 0), 0);
    
            yearRec.GHG_WPS_biomass = -(sum_delta_C_WPS_biomass * 3.6666666666666665);
        }
    }
    Methodology Equation: NERRₜ = ΣGHGᵦₛₗ,ₜ - ΣGHGwₚₛ,ₜ - ΣGHGₗₖ,ₜ - ΣGHGwₚₛ,soil deduction,ₜ + FRPₜ
    Code Implementation: rec.NERRWE = getGHGBSL(...) + getGHGWPS(...) + rec.FRP - rec.GHG_LK - rec.GHG_WPS_soil_deduction
    function processNETERR(baseline, project, netErrData, buffer_percentage, allowable_uncert, NERError) {
    
        // Combine baseline and project emissions by year
        const perYear = new Map();
    
        // Process baseline emissions
        for (const yr of baseline.yearly_data_for_baseline_GHG_emissions ?? []) {
            const total = (yr.annual_stratum_parameters ?? []).reduce((a, s) =>
                a + +(s.annual_stratum_level_parameters?.GHGBSL_soil_CO2_i_t ?? 0) *
                    +(s.annual_stratum_level_parameters?.A_i_t ?? 0), 0);
    
            perYear.set(yr.year_t, {
                year_t: yr.year_t,
                sumation_GHG_BSL_soil_CO2_i_A_i: total,
                sumation_GHG_WPS_soil_CO2_i_A_i: 0
            });
        }
    
        // Process project emissions
        for (const yr of project.yearly_data_for_project_GHG_emissions ?? []) {
            const total = (yr.annual_stratum_parameters ?? []).reduce((a, s) =>
                a + +(s.annual_stratum_level_parameters?.GHGWPS_soil_CO2_i_t ?? 0) *
                    +(s.annual_stratum_level_parameters?.A_i_t ?? 0), 0);
    
            if (perYear.has(yr.year_t)) {
                perYear.get(yr.year_t).sumation_GHG_WPS_soil_CO2_i_A_i = total;
            }
        }
    
        // Calculate annual net emission reductions
        netErrData.net_ERR_calculation_per_year = [...perYear.values()]
            .sort((a, b) => a.year_t - b.year_t)
            .map(rec => {
                // NERRWE calculation (Net Emission Reduction from Wetland Enhancement)
                rec.NERRWE = getGHGBSL(baseline.yearly_data_for_baseline_GHG_emissions, rec.year_t) +
                            getGHGWPS(project.yearly_data_for_project_GHG_emissions, rec.year_t) +
                            rec.FRP - rec.GHG_LK - rec.GHG_WPS_soil_deduction;
    
                // Apply methodology caps if configured
                rec.NERRWE_capped = rec.NERRWE;
                rec.NER_t = rec.NERRWE;
    
                // Apply uncertainty and error adjustments (this is crucial!)
                rec.adjusted_NER_t = rec.NER_t * (1 - NERError + allowable_uncert);
    
                return rec;
            });
    
        // Calculate buffer deductions and final VCUs
        const netErrArr = netErrData.net_ERR_calculation_per_year;
    
        netErrArr.forEach((rec, idx, arr) => {
            if (idx === 0) {
                // First year calculation
                rec.buffer_deduction = rec.NER_stock_t * buffer_percentage;
                rec.VCU = rec.adjusted_NER_t - rec.buffer_deduction;
            } else {
                // Subsequent years account for previous calculations
                const prevRec = arr[idx - 1];
                rec.buffer_deduction = calculateNetERRChange(
                    rec.adjusted_NER_t, prevRec.adjusted_NER_t,
                    rec.NER_stock_t, prevRec.NER_stock_t, buffer_percentage);
                rec.VCU = calculateNetVCU(rec.adjusted_NER_t, prevRec.adjusted_NER_t, rec.buffer_deduction);
            }
        });
    
        // Calculate total VCUs for this project instance
        netErrData.total_VCU_per_instance = netErrArr.reduce((sum, rec) => sum + (rec.VCU || 0), 0);
    }
    // Safe number conversion with defaults
    function safeNumber(value, defaultValue = 0) {
        const num = Number(value);
        return isNaN(num) || !isFinite(num) ? defaultValue : num;
    }
    
    // Safe array access
    const yearlyData = baseline.yearly_data_for_baseline_GHG_emissions ?? [];
    const stratumParams = yearRec.annual_stratum_parameters ?? [];
    
    // Division by zero protection
    function calculateRate(numerator, denominator) {
        if (denominator === 0 || denominator === null || denominator === undefined) {
            return 0; // Or whatever makes sense for your methodology
        }
        return numerator / denominator;
    }
    
    // Range validation
    function validateEmissionFactor(value, min = 0, max = 1000) {
        const num = safeNumber(value);
        if (num < min || num > max) {
            console.warn(`Emission factor ${num} outside expected range [${min}, ${max}]`);
            return Math.max(min, Math.min(max, num)); // Clamp to valid range
        }
        return num;
    }
    function processInstanceSafely(instance, project_boundary) {
        try {
            const data = instance.project_instance;
    
            // Validate required data exists
            if (!data.baseline_emissions || !data.project_emissions) {
                throw new Error("Missing required emissions data");
            }
    
            // Process with validation
            processInstance(instance, project_boundary);
    
            // Validate results make sense
            const totalVCU = data.net_ERR.total_VCU_per_instance;
            if (totalVCU < 0) {
                console.warn("Negative VCUs calculated - check input data");
            }
    
        } catch (error) {
            console.error(`Error processing instance: ${error.message}`);
            // Set safe defaults rather than crashing
            instance.project_instance.net_ERR.total_VCU_per_instance = 0;
        }
    }
    // Validation against Allcot test artifact results
    // These are the manually calculated results from the methodology spreadsheet
    const allcotValidationBenchmark = {
        2022: { VCU: 0.01 },        // Hand-calculated using VM0033 equations
        2023: { VCU: 0.29 },        // Each value validated by methodology experts
        2024: { VCU: 4.31 },
        2025: { VCU: 1307.66 },
        // ... complete 40-year projection
        total_VCU_40_years: 2861923.07  // Sum of all manually calculated VCUs
    };
    
    function validateEquationImplementation(calculatedResults) {
        let totalCalculated = 0;
        let validationReport = {
            passedTests: 0,
            totalTests: 0,
            maxError: 0,
            scientificallyValid: true
        };
    
        // Compare each year's calculation against manual spreadsheet results
        for (const yearResult of calculatedResults.net_ERR_calculation_per_year) {
            const year = yearResult.year_t;
            const calculatedVCU = yearResult.VCU;
            const benchmarkVCU = allcotValidationBenchmark[year]?.VCU;
    
            if (benchmarkVCU !== undefined) {
                const absoluteError = Math.abs(calculatedVCU - benchmarkVCU);
                const relativeError = benchmarkVCU !== 0 ? (absoluteError / benchmarkVCU * 100) : 0;
    
                validationReport.totalTests++;
                validationReport.maxError = Math.max(validationReport.maxError, relativeError);
    
                if (relativeError < 0.01) { // High precision: < 0.01% error
                    validationReport.passedTests++;
                } else {
                    console.warn(`Equation validation failed for Year ${year}:`);
                    console.warn(`  Calculated: ${calculatedVCU.toFixed(6)}`);
                    console.warn(`  Expected (manual): ${benchmarkVCU.toFixed(6)}`);
                    console.warn(`  Error: ${relativeError.toFixed(6)}%`);
                    validationReport.scientificallyValid = false;
                }
            }
    
            totalCalculated += calculatedVCU;
        }
    
        // Validate total against manual calculation
        const totalError = Math.abs(totalCalculated - allcotValidationBenchmark.total_VCU_40_years) /
                          allcotValidationBenchmark.total_VCU_40_years * 100;
    
        console.log('=== VALIDATION REPORT ===');
        console.log(`Equation Implementation vs Manual Calculation:`);
        console.log(`  Tests Passed: ${validationReport.passedTests}/${validationReport.totalTests}`);
        console.log(`  Max Error: ${validationReport.maxError.toFixed(6)}%`);
        console.log(`  Total VCU Error: ${totalError.toFixed(6)}%`);
        console.log(`  Validation Status: ${validationReport.scientificallyValid ? 'VALID' : 'INVALID'}`);
    
        return validationReport.scientificallyValid && totalError < 0.001; // Must be < 0.001% error
    }
    def calc():
        """Main calculation function - Python version"""
        import json
        import sys
        # Guardian passes the serialized documents to the script as its first
        # argument (the Python analogue of arguments[0] in JavaScript); the
        # exact parsing here is an assumption - check your Guardian version.
        documents = json.loads(sys.argv[1]) if len(sys.argv) > 1 else []
    
        if not documents:
            return {}
    
        document = documents[0]['document']
        creds = document['credentialSubject']
    
        total_vcus = 0
    
        for cred in creds:
            for instance in cred.get('project_data_per_instance', []):
                process_instance(instance, cred.get('project_boundary', {}))
                total_vcus += instance['project_instance']['net_ERR'].get('total_VCU_per_instance', 0)
    
            cred['total_vcus'] = total_vcus
    
        return document['credentialSubject'][0]
    
    def process_baseline_emissions(baseline, **kwargs):
        """Process baseline emissions - Python version"""
        gwp_ch4 = kwargs.get('GWP_CH4', 28)
    
        for year_rec in baseline.get('yearly_data_for_baseline_GHG_emissions', []):
            year_t = year_rec['year_t']
    
            for stratum in year_rec.get('annual_stratum_parameters', []):
                asl = stratum.get('annual_stratum_level_parameters', {})
    
                # Calculate emissions with safe defaults
                ch4_baseline = asl.get('CH4_BSL_soil_i_t', 0)
                asl['GHGBSL_soil_CH4_i_t'] = ch4_baseline * gwp_ch4
    debug('Processing year:', year_t);
    debug('Baseline emissions:', asl.GHGBSL_soil_CO2_i_t);
    debug('Project emissions:', asl.GHGWPS_soil_CO2_i_t);
    // Quick test of a calculation function
    function testSoilEmissions() {
        const testData = { delta_C_BSL_soil_i_t: 100, A_i_t: 10 };
        const result = calculateSoilCO2Emissions(testData);
        const expected = -(3.6666666666666665 * 100) * 10;
        debug('Test passed:', Math.abs(result - expected) < 0.01);
    }
    // VM0033 Production Implementation: 25+ Functions in 6 Major Categories
    
    // ── 1. DATA ACCESS UTILITIES (Lines 7-37) ──
    adjustValues()              // Document post-processing
    getStartYear()             // Find earliest monitoring year
    getProjectBoundaryValue()  // Extract project boundary settings
    getQuantificationValue()   // Get quantification approach parameters
    getIndividualParam()       // Access individual methodology parameters
    getMonitoringValue()       // Extract monitoring period data
    getWoodProductValue()      // Access wood product parameters
    
    // ── 2. TEMPORAL BOUNDARY SYSTEM (Lines 39-350) ──
    processMonitoringSubmergence()           // Process submergence monitoring data
    getDeltaCBSLAGBiomassForStratumAndYear() // Biomass delta calculations across time
    calculatePDTSDT()                        // Peat & Soil Depletion Time calculations
    getEndPDTPerStratum()                   // Stratum-specific PDT boundaries
    getEndSDTPerStratum()                   // Stratum-specific SDT boundaries
    calculate_peat_strata_input_coverage_100_years()     // 100-year peat projections
    calculate_non_peat_strata_input_coverage_100_years() // 100-year mineral soil projections
    getCBSL_i_t0()                          // Initial baseline carbon stocks
    calculateRemainingPercentage()          // Remaining depletion percentages
    
    // ── 3. SOC CALCULATION APPROACHES (Lines 352-516) ──
    totalStockApproach()        // VM0033 Total Stock Approach (Section 5.2.1)
    stockLossApproach()         // VM0033 Stock Loss Approach (Section 5.2.2)
    SOC_MAX_calculation()       // Soil Organic Carbon maximum calculations
    
    // ── 4. EMISSION PROCESSING ENGINES (Lines 517-926) ──
    processBaselineEmissions()  // Complete baseline scenario processing
    processProjectEmissions()   // Complete project scenario processing
    processNETERR()            // Net emission reduction calculations
    
    // ── 5. SPECIALIZED CALCULATORS (Lines 95-180) ──
    computeDeductionAllochBaseline()  // Allocation deductions for baseline
    computeDeductionAllochProject()   // Allocation deductions for project
    getFireReductionPremiumPerYear()  // Fire reduction premium by year
    getGHGBSL/WPS/Biomass()          // GHG emission getters by type
    calculateNetERRChange()           // VCU change between monitoring periods
    calculateNetVCU()                // Net VCU calculations
    
    // ── 6. ORCHESTRATION & CONTROL (Lines 1121-1261) ──
    calculateTotalVCUPerInstance()    // Sum VCUs across monitoring periods
    processInstance()                 // Main instance processing orchestrator
    calc()                           // Entry point function
    // From er-calculations.js:181-286 - VM0033 temporal boundary calculation
    function calculatePDTSDT(baseline, isProjectQuantifyBSLReduction, temporalBoundary, crediting_period) {
        if (isProjectQuantifyBSLReduction) {
            // Work on earliest year for temporal boundary establishment
            const baselineEmissionsSorted = (baseline.yearly_data_for_baseline_GHG_emissions || [])
                .slice() // Prevent mutation of original array
                .sort((a, b) => a.year_t - b.year_t);
    
            if (!baselineEmissionsSorted.length) return;
    
            baselineEmissionsSorted[0].annual_stratum_parameters.forEach(stratum => {
                const sc = stratum.stratum_characteristics ?? {};
                const asl = stratum.annual_stratum_level_parameters ?? {};
    
                // Extract critical parameters from test artifact StratumLevelInput worksheet
                const {
                    soil_disturbance_type,        // From Column C in test data
                    drained_20_yr,               // From Column D in test data
                    significant_soil_erosion_as_non_peat_soil, // From Column E
                    RateCloss_BSL_i             // From Column F - soil carbon loss rate
                } = sc;
    
                let SDT = {};  // Soil organic carbon Depletion Time
                let PDT = {};  // Peat Depletion Time
    
                // VM0033 Equation 5.1.1 - Initial soil carbon calculation
                SDT.CBSL_i_t0 = (isProjectQuantifyBSLReduction && sc.is_project_quantify_BSL_reduction)
                    ? sc.depth_soil_i_t0 * sc.VC_I_mineral_soil_portion * 10  // Convert to tC/ha
                    : 0;
    
                // VM0033 Equation 5.1.2 - Soil Depletion Time calculation
                if (isProjectQuantifyBSLReduction && sc.is_project_quantify_BSL_reduction) {
                    if (significant_soil_erosion_as_non_peat_soil || drained_20_yr) {
                        // Immediate depletion scenarios
                        SDT.t_SDT_BSL_i = 0;
                    } else {
                        // Calculate remaining time after peat depletion
                        const duration = crediting_period - (sc.soil_type_t0 === 'Peatsoil'
                            ? (sc.depth_peat_i_t0 / sc.Ratepeatloss_BSL_i)  // Peat depletion duration
                            : 0
                        );
    
                        if (duration > 0) {
                            SDT.t_SDT_BSL_i = soil_disturbance_type === "Erosion"
                                ? 5  // Fixed 5-year erosion period per methodology
                                : (RateCloss_BSL_i !== 0 ? SDT.CBSL_i_t0 / RateCloss_BSL_i : 0);
                        }
                    }
                } else {
                    SDT.t_SDT_BSL_i = 0;
                }
    
                // VM0033 Equation 5.1.3 - Peat Depletion Time for peat soils
                if (sc.soil_type_t0 === 'Peatsoil' && sc.is_project_quantify_BSL_reduction) {
                    PDT.t_PDT_BSL_i = sc.depth_peat_i_t0 / sc.Ratepeatloss_BSL_i;  // Years until peat depleted
                    PDT.start_PDT = 0;                    // Peat depletion starts immediately
                    PDT.end_PDT = PDT.t_PDT_BSL_i;       // When peat is fully depleted
                } else {
                    // Non-peat soils have no peat depletion
                    PDT.t_PDT_BSL_i = 0;
                    PDT.start_PDT = 0;
                    PDT.end_PDT = 0;
                }
    
                // Coordinate PDT and SDT temporal boundaries
                SDT.start_PDT = PDT.start_PDT;
                SDT.end_PDT = Math.min(PDT.end_PDT, crediting_period);  // Cap at crediting period
    
                // Soil depletion starts after peat depletion ends
                if (SDT.t_SDT_BSL_i > 0) {
                    SDT.start_SDT = SDT.end_PDT;  // Start when peat depletion ends
                } else {
                    SDT.start_SDT = 0;           // No soil depletion
                }
    
                SDT.end_SDT = SDT.start_SDT + SDT.t_SDT_BSL_i;  // When soil is depleted
    
                // Store temporal boundary data for this stratum
                temporalBoundary.push({
                    stratum_i: stratum.stratum_i,
                    peat_depletion_time: {
                        "t_PDT_BSL_i": PDT.t_PDT_BSL_i,
                        "start_PDT": PDT.start_PDT,
                        "end_PDT": PDT.end_PDT,
                        // Guardian metadata for schema validation
                        type: temporalBoundary[0]?.peat_depletion_time?.type,
                        '@context': temporalBoundary[0]?.peat_depletion_time?.['@context'] ?? [],
                    },
                    soil_organic_carbon_depletion_time: {
                        "t_SDT_BSL_i": SDT.t_SDT_BSL_i,
                        'CBSL_i_t0': SDT.CBSL_i_t0,
                        "start_SDT": SDT.start_SDT,
                        "end_SDT": SDT.end_SDT,
                        "start_PDT": SDT.start_PDT,
                        "end_PDT": SDT.end_PDT,
                        type: temporalBoundary[0]?.soil_organic_carbon_depletion_time?.type,
                        '@context': temporalBoundary[0]?.soil_organic_carbon_depletion_time?.['@context'] ?? [],
                    },
                    type: temporalBoundary?.[0]?.type,
                    '@context': temporalBoundary?.[0]?.['@context'] ?? [],
                });
            });
    
            // Remove template element after processing
            temporalBoundary.shift();
        }
    }
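    // A minimal usage sketch (synthetic values, illustrative only). Note the
    // template pattern the function relies on: the caller seeds temporalBoundary
    // with one template element whose type/@context metadata is copied onto every
    // stratum record, and the template itself is shift()-ed off at the end.
    //
    //   const temporalBoundary = [{ type: 'TemporalBoundary', '@context': [] }];
    //   calculatePDTSDT(baselineData, true, temporalBoundary, 40 /* crediting period in years */);
    //   // temporalBoundary now holds one { stratum_i, peat_depletion_time,
    //   // soil_organic_carbon_depletion_time, ... } record per stratum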
    // From er-calculations.js:288-298 - Access PDT end time for specific stratum
    function getEndPDTPerStratum(temporal_boundary, stratum_i) {
        const stratumTemporalBoundary = temporal_boundary.find(
            (boundary) => boundary.stratum_i === stratum_i
        );
    
        if (stratumTemporalBoundary) {
            return stratumTemporalBoundary.soil_organic_carbon_depletion_time.end_PDT;
        }
    
        return 0;  // Default if no temporal boundary found
    }
    
    // From er-calculations.js:300-310 - Access SDT end time for specific stratum
    function getEndSDTPerStratum(temporal_boundary, stratum_i) {
        const stratumTemporalBoundary = temporal_boundary.find(
            (boundary) => boundary.stratum_i === stratum_i
        );
    
        if (stratumTemporalBoundary) {
            return stratumTemporalBoundary.soil_organic_carbon_depletion_time.end_SDT;
        }
    
        return 0;  // Default if no temporal boundary found
    }
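    // Example lookups (hypothetical stratum ID): once calculatePDTSDT has
    // populated the boundary array, the accessors are one-liners; both fall
    // back to 0 when the stratum has no recorded temporal boundary.
    //
    //   const endPDT = getEndPDTPerStratum(temporalBoundary, 'stratum_1');
    //   const endSDT = getEndSDTPerStratum(temporalBoundary, 'stratum_1');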
    // From er-calculations.js:312-321 - Look up the 100-year peat coverage input for a stratum
    function calculate_peat_strata_input_coverage_100_years(data, strata) {
        const match = data.find(item => String(item.stratum_i) === String(strata));
        return match ? Number(match.peat_strata_input_coverage_100_years) || 0 : 0;
    }
    
    // From er-calculations.js:322-331 - Look up the 100-year mineral soil coverage input for a stratum
    function calculate_non_peat_strata_input_coverage_100_years(data, strata) {
        const match = data.find(item => String(item.stratum_i) === String(strata));
        return match ? Number(match.non_peat_strata_input_coverage_100_years) || 0 : 0;
    }
    
    // From er-calculations.js:332-338 - Get initial baseline carbon stock for stratum
    function getCBSL_i_t0(temporalBoundary = [], strata) {
        const match = temporalBoundary.find(item => String(item.stratum_i) === String(strata));
        return match ? Number(match.soil_organic_carbon_depletion_time.CBSL_i_t0) || 0 : 0;
    }
    
    // From er-calculations.js:340-349 - Calculate the remaining carbon percentage after depletion
    function calculateRemainingPercentage(match, D41) {
        if (match === 0) return 100;  // No depletion = 100% remaining
        if (D41 === 0) return 0;      // No carbon = 0% remaining
    
        const percentage = (D41 / match) * 100;
        return Math.min(percentage, 100);  // Cap at 100%
    }
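    // Worked example (synthetic numbers): with match = 0.5 and D41 = 0.3 the
    // result is (0.3 / 0.5) * 100 = 60, i.e. 60% remaining; with D41 = 0.8 the
    // raw ratio is 160%, which Math.min clamps to 100.
    //
    //   calculateRemainingPercentage(0.5, 0.3); // 60
    //   calculateRemainingPercentage(0.5, 0.8); // 100 (capped)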
    // From er-calculations.js:352-458 - Total Stock Approach implementation
    function totalStockApproach(
        baseline,
        total_stock_approach_parameters,
        peat_strata_input_coverage_100_years,
        non_peat_strata_input_coverage_100_years,
        temporal_boundary
    ) {
        let sumWPS = 0;   // Σ C_WPS_i_t100 × A_WPS_i_t100 (project carbon at 100 years)
        let sumBSL = 0;   // Σ C_BSL_i_t100 × A_BSL_i_t100 (baseline carbon at 100 years)
    
        // Process each stratum in the first-year baseline record
        baseline.yearly_data_for_baseline_GHG_emissions[0].annual_stratum_parameters
            .forEach((stratum) => {
                const { stratum_i } = stratum;
                const charac = stratum.stratum_characteristics ?? {};
    
                // Extract parameters with safe defaults (defensive programming)
                const depth_peat_i_t0 = Number(charac.depth_peat_i_t0) || 0;
                const VC_I_peat_portion = Number(charac.VC_I_peat_portion) || 0;
                const VC_I_mineral_soil_portion = Number(charac.VC_I_mineral_soil_portion) || 0;
                const Ratepeatloss_BSL_i = Number(charac.Ratepeatloss_BSL_i) || 0;
                const RateCloss_BSL_i = Number(charac.RateCloss_BSL_i) || 0;
                const A_WPS_i_t100 = Number(charac.A_WPS_i_t100) || 0;
                const A_BSL_i_t100 = Number(charac.A_BSL_i_t100) || 0;
    
                // VM0033 Equation 5.2.1.1 - Project scenario carbon at 100 years
                const depth_peat_WPS_t100 =
                    depth_peat_i_t0 -
                    calculate_peat_strata_input_coverage_100_years(
                        peat_strata_input_coverage_100_years,
                        stratum_i
                    );
    
                // Project organic soil carbon (preserved peat)
                const C_WPS_i_t100_organic_soil =
                    charac.soil_type_t0 === "Peatsoil"
                        ? depth_peat_WPS_t100 * VC_I_peat_portion * 10  // Convert to tC/ha
                        : 0;
    
                // Project mineral soil carbon (preserved mineral soil)
                const C_WPS_i_t100_mineral_soil =
                    getCBSL_i_t0(temporal_boundary, stratum_i) -
                    calculate_non_peat_strata_input_coverage_100_years(
                        non_peat_strata_input_coverage_100_years,
                        stratum_i
                    );
    
                const C_WPS_i_t100 =
                    C_WPS_i_t100_organic_soil + C_WPS_i_t100_mineral_soil;
    
                // VM0033 Equation 5.2.1.2 - Baseline scenario carbon at 100 years
                const depth_peat_BSL_t100 =
                    depth_peat_i_t0 - 100 * Ratepeatloss_BSL_i;  // Peat lost over 100 years
    
                const C_BSL_i_t100_organic_soil =
                    charac.soil_type_t0 === "Peatsoil"
                        ? depth_peat_BSL_t100 * VC_I_peat_portion * 10
                        : 0;
    
                // Calculate remaining years after peat depletion for mineral soil loss
                const remaining_years_after_peat_depletion_BSL =
                    calculateRemainingPercentage(Ratepeatloss_BSL_i, depth_peat_i_t0);
    
                const C_BSL_i_t100_mineral_soil =
                    getCBSL_i_t0(temporal_boundary, stratum_i) -
                    remaining_years_after_peat_depletion_BSL * RateCloss_BSL_i;
    
                const C_BSL_i_t100 =
                    charac.soil_type_t0 === "Peatsoil"
                        ? C_BSL_i_t100_organic_soil
                        : C_BSL_i_t100_mineral_soil;
    
                // VM0033 Equation 5.2.1.3 - Area-weighted carbon stock sums
                sumWPS += C_WPS_i_t100 * A_WPS_i_t100;
                sumBSL += C_BSL_i_t100 * A_BSL_i_t100;
    
                // Store detailed calculations for each stratum
                total_stock_approach_parameters.push({
                    stratum_i,
                    C_WPS_i_t100,
                    depthpeat_WPS_i_t100: Math.max(depth_peat_WPS_t100, 0),
                    C_WPS_i_t100_organic_soil,
                    C_WPS_i_t100_mineral_soil: Math.max(C_WPS_i_t100_mineral_soil, 0),
                    Depthpeat_BSL_i_t100: Math.max(depth_peat_BSL_t100, 0),
                    C_BSL_i_t100_organic_soil,
                    remaining_years_after_peat_depletion_BSL,
                    C_BSL_i_t100_mineral_soil: Math.max(
                        getCBSL_i_t0(temporal_boundary, stratum_i) - 100 * RateCloss_BSL_i,
                        0
                    ),
                    C_BSL_i_t100,
                    type: total_stock_approach_parameters?.[0]?.type,
                    "@context": total_stock_approach_parameters?.[0]?.["@context"] ?? [],
                });
            });
    
        // Remove template element after processing
        total_stock_approach_parameters.shift();
    
        // VM0033 Equation 5.2.1.4 - Check if project stocks are ≥ 105% of baseline
        const condition = sumWPS >= sumBSL * 1.05;
    
        return {
            condition,
            sumWPS,
            sumBSL,
            diff: condition ? sumWPS - sumBSL : 0,  // Only credit if condition met
        };
    }
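    // Illustration of the Equation 5.2.1.4 gate (synthetic sums): with
    // sumWPS = 1050 tC and sumBSL = 1000 tC, sumWPS >= 1000 * 1.05 holds
    // exactly, so the result is { condition: true, diff: 50 }; at
    // sumWPS = 1049 the condition fails and diff is forced to 0, meaning
    // no SOC_MAX credit.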
    // From er-calculations.js:461-506 - Stock Loss Approach implementation
    function stockLossApproach(baseline, stock_loss_approach_parameters,
        peat_strata_input_coverage_100_years, non_peat_strata_input_coverage_100_years, temporal_boundary) {
    
        baseline.yearly_data_for_baseline_GHG_emissions[0].annual_stratum_parameters.forEach(stratum => {
            const { stratum_i } = stratum;
            const META = {
                type: stock_loss_approach_parameters?.[0]?.type,
                '@context': stock_loss_approach_parameters?.[0]?.['@context'] ?? [],
            };
    
            // VM0033 Equation 5.2.2.1 - Calculate carbon loss over 100 years
    
            // Peat carbon loss calculations
            const total_peat_volume_loss = calculate_peat_strata_input_coverage_100_years(
                peat_strata_input_coverage_100_years, stratum_i) *
                stratum.stratum_characteristics.VC_I_peat_portion;
    
            const Closs_BSL_t100_organic_soil = 10 * 100 * (
                stratum.stratum_characteristics.Ratepeatloss_BSL_i *
                stratum.stratum_characteristics.VC_I_peat_portion);
    
            const Closs_WPS_t100_organic_soil = 10 * total_peat_volume_loss;
    
            // Mineral soil carbon loss calculations
            const total_carbon_loss_volume = calculate_non_peat_strata_input_coverage_100_years(
                non_peat_strata_input_coverage_100_years, stratum_i) *
                stratum.stratum_characteristics.VC_I_mineral_soil_portion;
    
            const Closs_BSL_t100_mineral_soil = 10 * 100 * (
                stratum.stratum_characteristics.RateCloss_BSL_i *
                stratum.stratum_characteristics.VC_I_mineral_soil_portion);
    
            const Closs_WPS_t100_mineral_soil = 10 * total_carbon_loss_volume;
    
            // Choose appropriate carbon loss based on soil type
            const Closs_BSL_i_t100 = stratum.stratum_characteristics.soil_type_t0 === 'Peatsoil'
                ? Closs_BSL_t100_organic_soil
                : Closs_BSL_t100_mineral_soil;
    
            const Closs_WPS_i_t100 = stratum.stratum_characteristics.soil_type_t0 === 'Peatsoil'
                ? Closs_WPS_t100_organic_soil
                : Closs_WPS_t100_mineral_soil;
    
            // VM0033 Equation 5.2.2.2 - Area-weighted total carbon losses
            const total_baseline_carbon_loss = Closs_BSL_i_t100 * stratum.stratum_characteristics.A_BSL_i;
            const total_project_carbon_loss = Closs_WPS_i_t100 * stratum.stratum_characteristics.A_WPS_i;
    
            // Store calculations for this stratum
            stock_loss_approach_parameters.push({
                "stratum_i": stratum_i,
                "total_peat_volume_loss": total_peat_volume_loss,
                "Closs_BSL_t100_organic_soil": Closs_BSL_t100_organic_soil,
                "Closs_WPS_t100_organic_soil": Closs_WPS_t100_organic_soil,
                "total_carbon_loss_volume": total_carbon_loss_volume,
                "Closs_BSL_t100_mineral_soil": Closs_BSL_t100_mineral_soil,
                "Closs_WPS_t100_mineral_soil": Closs_WPS_t100_mineral_soil,
                "Closs_BSL_i_t100": Closs_BSL_i_t100,
                "Closs_WPS_i_t100": Closs_WPS_i_t100,
                "total_baseline_carbon_loss": total_baseline_carbon_loss,
                "total_project_carbon_loss": total_project_carbon_loss,
                ...META
            })
        })
    
        // Remove template element
        stock_loss_approach_parameters.shift();
    
        // VM0033 Equation 5.2.2.3 - Sum across all strata
        const total_baseline_carbon_loss_sum = stock_loss_approach_parameters.reduce(
            (acc, curr) => acc + curr.total_baseline_carbon_loss, 0);
        const total_project_carbon_loss_sum = stock_loss_approach_parameters.reduce(
            (acc, curr) => acc + curr.total_project_carbon_loss, 0);
    
        return {
            total_baseline_carbon_loss_sum: total_baseline_carbon_loss_sum,
            total_project_carbon_loss_sum: total_project_carbon_loss_sum,
            diff: total_baseline_carbon_loss_sum - total_project_carbon_loss_sum  // Carbon saved
        }
    }
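    // Sanity check (synthetic numbers): a stratum losing 10 tC per hectare
    // over 100 years in the baseline but only 2 under the project, over
    // 100 ha in each scenario, contributes 1000 - 200 = 800 tC to diff,
    // the carbon saved that feeds SOC_MAX below.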
    // From er-calculations.js:508-514 - SOC approach selector
    function SOC_MAX_calculation(baseline, peat_strata_input_coverage_100_years,
        non_peat_strata_input_coverage_100_years, temporal_boundary, approach, ineligible_wetland_areas) {
    
        if (approach === 'Total stock approach') {
            ineligible_wetland_areas.SOC_MAX = totalStockApproach(
                baseline,
                ineligible_wetland_areas.total_stock_approach_parameters,
                peat_strata_input_coverage_100_years,
                non_peat_strata_input_coverage_100_years,
                temporal_boundary
            ).diff
        } else {
            ineligible_wetland_areas.SOC_MAX = stockLossApproach(
                baseline,
                ineligible_wetland_areas.stock_loss_approach_parameters,
                peat_strata_input_coverage_100_years,
                non_peat_strata_input_coverage_100_years,
                temporal_boundary
            ).diff
        }
    }
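    // Usage sketch (field names as in the orchestration excerpt further down):
    // the selector writes its result onto ineligible_wetland_areas rather than
    // returning it.
    //
    //   SOC_MAX_calculation(
    //       data.baseline_emissions,
    //       data.peat_strata_input_coverage_100_years,
    //       data.non_peat_strata_input_coverage_100_years,
    //       temporalBoundary,
    //       'Total stock approach',        // any other value selects the stock loss approach
    //       data.ineligible_wetland_areas  // receives .SOC_MAX
    //   );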
    // From er-calculations.js:39-69 - Process submergence monitoring data
    function processMonitoringSubmergence(subInputs = {}) {
        const years = subInputs.submergence_monitoring_data ?? [];
    
        for (const yrRec of years) {
            const {
                monitoring_year,
                submergence_measurements_for_each_stratum: strata = []
            } = yrRec;
    
            // Process each stratum's submergence data for this monitoring year
            for (const s of strata) {
                const {
                    stratum_i,                                      // Stratum identifier
                    is_submerged,                                   // Boolean: is this stratum submerged?
                    submergence_T,                                  // Time period of submergence (years)
                    area_submerged_percentage,                      // Percentage of stratum area submerged
                    C_BSL_agbiomass_i_t_ar_tool_14,               // Initial baseline above-ground biomass
                    C_BSL_agbiomass_i_t_to_T_ar_tool_14,          // Baseline biomass at time T
                    delta_C_BSL_agbiomass_i_t                      // Calculated delta (output)
                } = s;
    
                if (is_submerged) {
                    // VM0033 Equation 6.1 - Calculate biomass change due to submergence
                    const tempDelta = (C_BSL_agbiomass_i_t_ar_tool_14 - C_BSL_agbiomass_i_t_to_T_ar_tool_14) / submergence_T;
                    const tempDeltaFinal = tempDelta * area_submerged_percentage;
    
                    // Apply methodology constraint: negative deltas set to zero
                    if (tempDeltaFinal < 0) {
                        s.delta_C_BSL_agbiomass_i_t = 0;
                    } else {
                        s.delta_C_BSL_agbiomass_i_t = tempDeltaFinal;
                    }
                } else {
                    // No submergence = no biomass change
                    s.delta_C_BSL_agbiomass_i_t = 0;
                }
            }
        }
    }
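    // Worked example of Equation 6.1 (synthetic values, taking
    // area_submerged_percentage as a fraction here): a stratum submerged for
    // T = 4 years whose baseline above-ground biomass fell from 20 to
    // 12 tC/ha gives tempDelta = (20 - 12) / 4 = 2; with half the area
    // submerged (0.5) the stored delta_C_BSL_agbiomass_i_t is 1. Had biomass
    // grown instead, the negative delta would be clamped to 0.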
    // From er-calculations.js:71-91 - Retrieve biomass delta for specific stratum/year
    function getDeltaCBSLAGBiomassForStratumAndYear(
        subInputs = {},
        stratumId,
        year
    ) {
        const results = [];
    
        // Search through all monitoring year records
        for (const yrRec of subInputs.submergence_monitoring_data ?? []) {
            // Check each stratum measurement in this monitoring year
            for (const s of yrRec.submergence_measurements_for_each_stratum ?? []) {
                // Match the stratum ID; collect deltas only from monitoring years after the requested year
                if (String(s.stratum_i) === String(stratumId) && (year < yrRec.monitoring_year)) {
                    results.push({
                        year: yrRec.monitoring_year,
                        delta: s.delta_C_BSL_agbiomass_i_t,
                    });
                }
            }
        }
    
        // Return results or default if no matches found
        return results.length ? results : [{ year: null, delta: 0 }];
    }
    // From er-calculations.js:95-115 - Baseline allocation deduction calculation
    function computeDeductionAllochBaseline(params) {
        const {
            baseline_soil_SOC,        // Is baseline soil SOC included?
            soil_insitu_approach,     // Soil measurement approach
            soil_type,               // Soil type (Peatsoil vs others)
            AU5,                     // Soil emissions value
            AV5,                     // Allocation percentage
            BB5                      // Alternative emissions value
        } = params;
    
        // No deduction if soil SOC not included or peat soil
        if (baseline_soil_SOC !== true) return 0;
        if (soil_type === "Peatsoil") return 0;
    
        const fraction = AV5 / 100;  // Convert percentage to fraction
    
        // Apply appropriate calculation based on measurement approach
        if (soil_insitu_approach === "Proxies" || soil_insitu_approach === "Field-collected data") {
            return AU5 * fraction;
        }
    
        return BB5 * fraction;
    }
    
    // From er-calculations.js:117-137 - Project allocation deduction calculation
    function computeDeductionAllochProject(params) {
        const {
            project_soil_SOC,        // Is project soil SOC included?
            soil_insitu_approach,    // Soil measurement approach
            soil_type,               // Soil type
            AK5,                     // Project soil emissions value
            AL5,                     // Allocation percentage
            AR5                      // Alternative emissions value
        } = params;
    
        // Same logic as baseline but for project scenario
        if (project_soil_SOC !== true) return 0;
        if (soil_type === "Peatsoil") return 0;
    
        const fraction = AL5 / 100;
    
        if (soil_insitu_approach === "Proxies" || soil_insitu_approach === "Field-collected data") {
            return AK5 * fraction;
        }
    
        return AR5 * fraction;
    }
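    // Example call (synthetic values; AU5/AV5/BB5 mirror the test artifact's
    // spreadsheet cells): with soil SOC included, a non-peat soil, the
    // "Proxies" approach, AU5 = 40 and AV5 = 25 (%), the baseline deduction is
    // 40 * 0.25 = 10. The project-side helper behaves identically with its
    // AK5/AL5/AR5 inputs.
    //
    //   computeDeductionAllochBaseline({
    //       baseline_soil_SOC: true,
    //       soil_insitu_approach: 'Proxies',
    //       soil_type: 'Mineral',
    //       AU5: 40, AV5: 25, BB5: 0
    //   }); // -> 10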
    // From er-calculations.js:140-169 - Emission value getters by year
    function getFireReductionPremiumPerYear(data, year_t) {
        return (data ?? [])
            .find(r => String(r.year_t) === String(year_t))
            ?.fire_reduction_premium_per_year ?? 0;
    }
    
    function getGHGBSL(data, year_t) {
        return (data ?? [])
            .find(r => String(r.year_t) === String(year_t))
            ?.GHG_BSL ?? 0;
    }
    
    function getGHGWPS(data, year_t) {
        return (data ?? [])
            .find(r => String(r.year_t) === String(year_t))
            ?.GHG_WPS ?? 0;
    }
    
    function getGHGBSLBiomass(data, year_t) {
        return (data ?? [])
            .find(r => String(r.year_t) === String(year_t))
            ?.GHG_BSL_biomass ?? 0;
    }
    
    function getGHGWPSBiomass(data, year_t) {
        return (data ?? [])
            .find(r => String(r.year_t) === String(year_t))
            ?.GHG_WPS_biomass ?? 0;
    }
    // From er-calculations.js:170-179 - VCU change calculations
    function calculateNetERRChange(O6, O5, T6, T5, U6) {
        // Calculate change in emission reductions between periods
        // O6, O5: Current and previous emission reduction values
        // T6, T5: Current and previous stock values
        // U6: Buffer percentage
        return (O6 - O5) - (T6 - T5) * U6;
    }
    
    function calculateNetVCU(O6, O5, V6) {
        // Calculate net VCUs considering buffer deductions
        // V6: Buffer deduction for this period
        return (O6 - O5) - V6;
    }
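    // Worked example (synthetic numbers): emission reductions grow from
    // O5 = 1000 to O6 = 1500 tCO2e while stocks grow from T5 = 200 to
    // T6 = 400, with buffer percentage U6 = 0.15:
    //
    //   calculateNetERRChange(1500, 1000, 400, 200, 0.15)
    //     // (1500 - 1000) - (400 - 200) * 0.15 = 500 - 30 = 470
    //   calculateNetVCU(1500, 1000, 30)
    //     // 500 - 30 = 470 VCUs for this monitoring period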
    // From er-calculations.js:1126-1184 - Complete parameter extraction
    function processInstance(instance, project_boundary) {
        const data = instance.project_instance;
        const projectBoundary = project_boundary;
    
        // ── PROJECT BOUNDARY EXTRACTION (Maps to ProjectBoundary worksheet) ──
        // Baseline scenario boundaries (determines what gets calculated)
        const BaselineAboveGroundTreeBiomass = getProjectBoundaryValue(projectBoundary, 'baseline_aboveground_tree_biomass');
        const BaselineAboveGroundNonTreeBiomass = getProjectBoundaryValue(projectBoundary, 'baseline_aboveground_non_tree_biomass');
        const BaselineBelowGroundBiomass = getProjectBoundaryValue(projectBoundary, 'baseline_below_ground_biomass');
        const BaselineLitter = getProjectBoundaryValue(projectBoundary, 'baseline_litter');
        const BaselineDeadWood = getProjectBoundaryValue(projectBoundary, 'baseline_dead_wood');
        const BaselineSoil = getProjectBoundaryValue(projectBoundary, 'baseline_soil');
        const BaselineWoodProducts = getProjectBoundaryValue(projectBoundary, 'baseline_wood_products');
        const BaselineMethaneProductionByMicrobes = getProjectBoundaryValue(projectBoundary, 'baseline_methane_production_by_microbes');
        const BaselineDenitrificationNitrification = getProjectBoundaryValue(projectBoundary, 'baseline_denitrification_nitrification');
        const BaselineBurningBiomassOrganicSoil = getProjectBoundaryValue(projectBoundary, 'baseline_burning_of_biomass_and_organic_soil');
        const BaselineFossilFuelUseCO2 = getProjectBoundaryValue(projectBoundary, 'baseline_fossil_fuel_use_CO2');
        const BaselineFossilFuelUseCH4 = getProjectBoundaryValue(projectBoundary, 'baseline_fossil_fuel_use_CH4');
        const BaselineFossilFuelUseN2O = getProjectBoundaryValue(projectBoundary, 'baseline_fossil_fuel_use_N2O');
    
        // Project scenario boundaries (what the restoration project includes)
        const ProjectAboveTreeBiomass = getProjectBoundaryValue(projectBoundary, 'project_aboveground_tree_biomass');
        const ProjectAboveNonTreeBiomass = getProjectBoundaryValue(projectBoundary, 'project_aboveground_non_tree_biomass');
        const ProjectBelowGroundBiomass = getProjectBoundaryValue(projectBoundary, 'project_below_ground_biomass');
        const ProjectLitter = getProjectBoundaryValue(projectBoundary, 'project_litter');
        const ProjectDeadWood = getProjectBoundaryValue(projectBoundary, 'project_dead_wood');
        const ProjectSoil = getProjectBoundaryValue(projectBoundary, 'project_soil');
        const ProjectWoodProducts = getProjectBoundaryValue(projectBoundary, 'project_wood_products');
        const ProjectMethaneProductionByMicrobes = getProjectBoundaryValue(projectBoundary, 'project_methane_production_by_microbes');
        const ProjectDenitrificationNitrification = getProjectBoundaryValue(projectBoundary, 'project_denitrification_nitrification');
        const ProjectBurningBiomass = getProjectBoundaryValue(projectBoundary, 'project_burning_of_biomass');
        const ProjectFossilFuelUseCO2 = getProjectBoundaryValue(projectBoundary, 'project_fossil_fuel_use_CO2');
        const ProjectFossilFuelUseCH4 = getProjectBoundaryValue(projectBoundary, 'project_fossil_fuel_use_CH4');
        const ProjectFossilFuelUseN2O = getProjectBoundaryValue(projectBoundary, 'project_fossil_fuel_use_N2O');
    
        // ── QUANTIFICATION APPROACH (Maps to QuantificationApproach worksheet) ──
        const QuantificationCO2EmissionsSoil = getQuantificationValue(data, 'quantification_co2_emissions_soil');
        const QuantificationCH4EmissionsSoil = getQuantificationValue(data, 'quantification_ch4_emissions_soil');
        const QuantificationN2OEmissionsSoil = getQuantificationValue(data, 'quantification_n2o_emissions_soil');
        const QuantificationSOCCapApproach = getQuantificationValue(data, 'quantification_soc_cap_approach');
        const QuantificationBaselineCO2Reduction = getQuantificationValue(data, 'quantification_baseline_co2_reduction');
        const QuantificationNERRWEMaxCap = getQuantificationValue(data, 'quantification_nerrwe_max_cap');
        const QuantificationFireReductionPremium = getQuantificationValue(data, 'quantification_fire_reduction_premium');
        const FireReductionPremiumArray = QuantificationFireReductionPremium ? getQuantificationValue(data, 'fire_reduction_premium') : [];
    
        // ── INDIVIDUAL PARAMETERS (Maps to IndividualParameters worksheet) ──
        // Smart parameter extraction - only get values if they're needed based on project boundary
        const GWP_CH4 = (BaselineMethaneProductionByMicrobes || BaselineBurningBiomassOrganicSoil ||
                         ProjectMethaneProductionByMicrobes || ProjectBurningBiomass) ?
                         getIndividualParam(data, 'gwp_ch4') : 0;
        const GWP_N2O = (BaselineDenitrificationNitrification || BaselineBurningBiomassOrganicSoil ||
                         ProjectDenitrificationNitrification || ProjectBurningBiomass) ?
                         getIndividualParam(data, 'gwp_n2o') : 0;
        const IsBurningOfBiomass = getIndividualParam(data, 'is_burning_of_biomass');
        const IsNERRWEMaxCap = getIndividualParam(data, 'is_NERRWE_max_cap');
        const AllowableUncertainty = getIndividualParam(data, 'individual_params_allowable_uncert');
        const BufferPercent = getIndividualParam(data, 'individual_params_buffer_%');
        const NERError = getIndividualParam(data, 'individual_params_NER_ERROR');
        const CreditingPeriod = getIndividualParam(data, 'individual_params_crediting_period');
        const EF_N2O_Burn = IsBurningOfBiomass ? getIndividualParam(data, 'EF_n20_burn') : 0;
        const EF_CH4_Burn = IsBurningOfBiomass ? getIndividualParam(data, 'EF_ch4_burn') : 0;
        const NERRWE_Max = IsNERRWEMaxCap ? getIndividualParam(data, 'NERRWE_max') : 0;
        // ...processInstance continues in the next excerpt...
    // From er-calculations.js:1185-1221 - Monitoring data processing
        // ── MONITORING PERIOD INPUTS (Maps to MonitoringPeriodInputs worksheet) ──
        const IsBaselineAbovegroundNonTreeBiomass = getMonitoringValue(data, 'is_baseline_aboveground_non_tree_biomass');
        const IsProjectAbovegroundNonTreeBiomass = getMonitoringValue(data, 'is_project_aboveground_non_tree_biomass');
    
        // Initialize monitoring data arrays
        let BaselineSoilCarbonStockMonitoringData = [];
        let ProjectSoilCarbonStockMonitoringData = [];
        let BaselineHerbaceousVegetationMonitoringData = [];
        let ProjectHerbaceousVegetationMonitoringData = [];
    
        // Extract submergence monitoring data (critical for VM0033)
        const SubmergenceMonitoringData = getMonitoringValue(data, 'submergence_monitoring_data');
    
        // Conditional data extraction based on project boundary and quantification approach
        BaselineSoilCarbonStockMonitoringData = (BaselineSoil && QuantificationCO2EmissionsSoil === 'Field-collected data') ?
            getMonitoringValue(data, 'baseline_soil_carbon_stock_monitoring_data') : [];
        ProjectSoilCarbonStockMonitoringData = (ProjectSoil && QuantificationCO2EmissionsSoil === 'Field-collected data') ?
            getMonitoringValue(data, 'project_soil_carbon_stock_monitoring_data') : [];
        BaselineHerbaceousVegetationMonitoringData = IsBaselineAbovegroundNonTreeBiomass ?
            getMonitoringValue(data, 'baseline_herbaceous_vegetation_monitoring_data') : [];
        ProjectHerbaceousVegetationMonitoringData = IsProjectAbovegroundNonTreeBiomass ?
            getMonitoringValue(data, 'project_herbaceous_vegetation_monitoring_data') : [];
    
        // ── WOOD PRODUCT PROJECT SCENARIO (Maps to IF Wood Product Is Included worksheet) ──
        let WoodProductDjCFjBCEF = [];
        let WoodProductSLFty = [];
        let WoodProductOfty = [];
        let WoodProductVexPcomi = [];
        let WoodProductCAVGTREEi = [];
    
        // Only extract wood product data if project boundary includes it
        if (ProjectWoodProducts) {
            WoodProductDjCFjBCEF = getWoodProductValue(data, 'wood_product_Dj_CFj_BCEF');
            WoodProductSLFty = getWoodProductValue(data, 'wood_product_SLFty');
            WoodProductOfty = getWoodProductValue(data, 'wood_product_Ofty');
            WoodProductVexPcomi = getWoodProductValue(data, 'wood_product_Vex_Pcomi');
            WoodProductCAVGTREEi = getWoodProductValue(data, 'wood_product_CAVG_TREE_i');
        }
    // From er-calculations.js:1221-1241 - Calculation orchestration
        // ── CALCULATION SEQUENCE ──
    
        // Step 1: Process submergence monitoring data (required for biomass calculations)
        processMonitoringSubmergence(data.monitoring_period_inputs);
    
        // Step 2: Establish temporal boundaries (required for all subsequent calculations)
        const temporalBoundary = data.temporal_boundary;
        calculatePDTSDT(data.baseline_emissions, QuantificationBaselineCO2Reduction, temporalBoundary, CreditingPeriod);
    
        // Step 3: Calculate baseline emissions (maps to 8.1BaselineEmissions worksheet)
        processBaselineEmissions(
            data.baseline_emissions,
            CreditingPeriod,
            BaselineMethaneProductionByMicrobes,
            QuantificationCH4EmissionsSoil,
            GWP_CH4,
            BaselineDenitrificationNitrification,
            QuantificationN2OEmissionsSoil,
            GWP_N2O,
            data.monitoring_period_inputs,
            temporalBoundary
        );
    
        // Step 4: Calculate project emissions (maps to 8.2ProjectEmissions worksheet)
        processProjectEmissions(
            data.project_emissions,
            ProjectMethaneProductionByMicrobes,
            QuantificationCH4EmissionsSoil,
            GWP_CH4,
            ProjectDenitrificationNitrification,
            QuantificationN2OEmissionsSoil,
            GWP_N2O,
            EF_N2O_Burn,
            EF_CH4_Burn,
            ProjectBurningBiomass
        );
    
        // Step 5: Calculate SOC_MAX using appropriate approach (maps to 5.2.4_Ineligible wetland areas worksheet)
        SOC_MAX_calculation(
            data.baseline_emissions,
            data.peat_strata_input_coverage_100_years,
            data.non_peat_strata_input_coverage_100_years,
            temporalBoundary,
            QuantificationSOCCapApproach,
            data.ineligible_wetland_areas
        );
    
        // Step 6: Calculate final net emission reductions and VCUs (maps to 8.5NetERR worksheet)
        processNETERR(
            data.baseline_emissions,
            data.project_emissions,
            data.net_ERR,
            data.ineligible_wetland_areas.SOC_MAX,
            QuantificationBaselineCO2Reduction,
            QuantificationFireReductionPremium,
            FireReductionPremiumArray,
            IsNERRWEMaxCap,
            NERRWE_Max,
            NERError,
            AllowableUncertainty,
            BufferPercent
        );
    }
    // From er-calculations.js:1243-1261 - Guardian customLogicBlock entry point
    function calc() {
        const document = documents[0].document;    // Guardian passes documents array
        const creds = document.credentialSubject;  // Extract credential subjects
    
        let totalVcus = 0;
    
        // Process each credential (can be multiple projects)
        for (const cred of creds) {
            // Process each project instance (can be multiple sites per project)
            for (const instance of cred.project_data_per_instance) {
                // This calls the complete processInstance orchestration we covered
                processInstance(instance, cred.project_boundary);
    
                // Accumulate VCUs from this instance
                totalVcus += instance.project_instance.net_ERR.total_VCU_per_instance;
            }
    
            // Store total for this credential
            cred.total_vcus = totalVcus;
        }
    
        // Guardian callback - return processed document
        done(adjustValues(document.credentialSubject[0]));
    }
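    // Note on the entry point: inside a Guardian customLogicBlock the
    // documents array and the done() callback are injected by Guardian itself,
    // not imported. adjustValues() is another er-calculations.js helper that
    // is not reproduced in this excerpt.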
    // From er-calculations.js:517-713 - Complete baseline emissions processing
    function processBaselineEmissions(baseline, crediting_period, baseline_soil_CH4, soil_CH4_approach,
        GWP_CH4, baseline_soil_N2O, soil_N2O_approach, GWP_N2O, monitoring_submergence_data, temporal_boundary) {
    
        // Process each monitoring year in the baseline scenario
        for (const yearRec of baseline.yearly_data_for_baseline_GHG_emissions ?? []) {
            const { year_t } = yearRec;
    
            // Process each stratum within this year
            for (const stratum of yearRec.annual_stratum_parameters ?? []) {
                const { stratum_i } = stratum;
                const sc = stratum.stratum_characteristics ?? {};
                const asl = stratum.annual_stratum_level_parameters ?? {};
    
                // ── AR TOOL INTEGRATION ────────────────────────────────────────
                // Extract AR Tool 14 results (afforestation/reforestation calculations)
                asl.delta_CTREE_BSL_i_t_ar_tool_14 = stratum.ar_tool_14.delta_C_TREE;
                asl.delta_CSHRUB_BSL_i_t_ar_tool_14 = stratum.ar_tool_14.delta_C_SHRUB;
    
                // Extract AR Tool 05 results (fuel consumption calculations)
                asl.ET_FC_I_t_ar_tool_5_BSL = stratum.ar_tool_05.ET_FC_y;
    
                // Check if this stratum quantifies baseline reduction
                const isProjectQuantifyBSLReduction = sc.is_project_quantify_BSL_reduction;
    
                // ── BIOMASS CALCULATIONS ───────────────────────────────────────
                // Apply above-ground non-tree biomass logic
                if (asl.is_aboveground_non_tree_biomass) {
                    asl.delta_CSHRUB_BSL_i_t_ar_tool_14 = 0;  // Zero out shrubs if non-tree biomass included
                }
    
                // VM0033 Equation 8.1.2 - Tree and shrub biomass change
                asl.delta_C_BSL_tree_or_shrub_i_t = const_12_by_44 * (
                    asl.delta_CTREE_BSL_i_t_ar_tool_14 + asl.delta_CSHRUB_BSL_i_t_ar_tool_14
                );
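                // (const_12_by_44 and const_44_by_12, used throughout these
                // excerpts, are presumably defined at module level in
                // er-calculations.js: the molecular-weight ratios 12/44 and
                // 44/12 for converting between carbon mass and CO2 mass)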
    
                // Handle herbaceous vegetation
                if (asl.is_aboveground_non_tree_biomass) {
                    asl.delta_C_BSL_herb_i_t = 0;  // Set to zero if already included above
                }
    
                // ── SOIL CO2 EMISSIONS ─────────────────────────────────────────
                if (asl.is_soil) {
                    const method = sc.co2_emissions_from_soil;
    
                    switch (method) {
                        case "Field-collected data":
                            // VM0033 Equation 8.1.1 - Direct field measurements
                            asl.GHGBSL_soil_CO2_i_t = -(const_44_by_12 * asl.delta_C_BSL_soil_i_t);
                            break;
    
                        case "Proxies":
                            // Use proxy data when direct measurement not available
                            asl.GHGBSL_soil_CO2_i_t = asl.GHG_emission_proxy_GHGBSL_soil_CO2_i_t;
                            break;
    
                        default:
                            // Sum of individual emission sources
                            asl.GHGBSL_soil_CO2_i_t = (asl.GHGBSL_insitu_CO2_i_t ?? 0) +
                                                      (asl.GHGBSL_eroded_CO2_i_t ?? 0) +
                                                      (asl.GHGBSL_excav_CO2_i_t ?? 0);
                    }
                } else {
                    asl.GHGBSL_soil_CO2_i_t = 0;  // No soil emissions for this stratum
                }
    
                // ── ALLOCATION DEDUCTIONS ──────────────────────────────────────
                // Calculate allocation deductions using the utility function
                asl.Deduction_alloch = computeDeductionAllochBaseline({
                    baseline_soil_SOC: asl.is_soil,
                    soil_insitu_approach: sc.co2_emissions_from_soil,
                    soil_type: sc.soil_type_t0,
                    AU5: asl.GHGBSL_soil_CO2_i_t,
                    AV5: asl.is_soil ? asl.percentage_C_alloch_BSL : 0,
                    BB5: (asl.is_soil && sc.co2_emissions_from_soil === "Others") ?
                         asl.GHGBSL_insitu_CO2_i_t : 0
                });
    
                // ── CH4 EMISSIONS FROM SOIL ────────────────────────────────────
                if (baseline_soil_CH4) {
                    const method = soil_CH4_approach;
    
                    switch (method) {
                        case "IPCC emission factors":
                            asl.GHGBSL_soil_CH4_i_t = asl.IPCC_emission_factor_ch4_BSL * GWP_CH4;
                            break;
    
                        case "Proxies":
                            asl.GHGBSL_soil_CH4_i_t = asl.GHG_emission_proxy_ch4_BSL * GWP_CH4;
                            break;
    
                        default:
                            asl.GHGBSL_soil_CH4_i_t = asl.CH4_BSL_soil_i_t * GWP_CH4;
                    }
                } else {
                    asl.GHGBSL_soil_CH4_i_t = 0;
                }
    
                // ── N2O EMISSIONS FROM SOIL ────────────────────────────────────
                if (baseline_soil_N2O) {
                    const method = soil_N2O_approach;
    
                    switch (method) {
                        case "IPCC emission factors":
                            asl.GHGBSL_soil_N2O_i_t = asl.IPCC_emission_factor_n2o_BSL * GWP_N2O;
                            break;
    
                        case "Proxies":
                            asl.GHGBSL_soil_N2O_i_t = asl.N2O_emission_proxy_BSL * GWP_N2O;
                            break;
    
                        default:
                            asl.GHGBSL_soil_N2O_i_t = asl.N2O_BSL_soil_I_t * GWP_N2O;
                    }
                } else {
                    asl.GHGBSL_soil_N2O_i_t = 0;
                }
    
                // ── TEMPORAL BOUNDARY APPLICATION ──────────────────────────────
                // This is where the PDT/SDT system gets applied to actual calculations
                const endPDT = isProjectQuantifyBSLReduction ?
                              getEndPDTPerStratum(temporal_boundary, stratum_i) : crediting_period;
                const endSDT = isProjectQuantifyBSLReduction ?
                              getEndSDTPerStratum(temporal_boundary, stratum_i) : crediting_period;
    
                if (isProjectQuantifyBSLReduction) {
                    const emissionsArray = baseline.yearly_data_for_baseline_GHG_emissions || [];
                    const startYear = getStartYear(emissionsArray);
                    const period = year_t - startYear + 1;
    
                    // VM0033 Equation 8.1.26 - Apply temporal boundary constraints
                    if (period > endPDT && period > endSDT) {
                        // Beyond depletion periods - no soil emissions
                        asl.GHGBSL_soil_i_t = 0;
                    } else {
                        // Within depletion periods - calculate full soil emissions
                        asl.GHGBSL_soil_i_t = asl.A_i_t * (
                            asl.GHGBSL_soil_CO2_i_t - asl.Deduction_alloch +
                            asl.GHGBSL_soil_CH4_i_t + asl.GHGBSL_soil_N2O_i_t
                        );
                    }
                } else {
                    // No temporal boundary constraints
                    asl.GHGBSL_soil_i_t = asl.A_i_t * (
                        asl.GHGBSL_soil_CO2_i_t - asl.Deduction_alloch +
                        asl.GHGBSL_soil_CH4_i_t + asl.GHGBSL_soil_N2O_i_t
                    );
                }
    
                // ── BIOMASS CALCULATION WITH SUBMERGENCE ──────────────────────
                // VM0033 Equation 8.1.23 - Integrate submergence monitoring data
                const monitoring_submergence = getDeltaCBSLAGBiomassForStratumAndYear(
                    monitoring_submergence_data, stratum_i, yearRec.year_t
                );
                asl.delta_C_BSL_biomass_i_t = asl.delta_C_BSL_tree_or_shrub_i_t +
                                             asl.delta_C_BSL_herb_i_t -
                                             monitoring_submergence[0].delta;
    
                // ── FUEL CONSUMPTION EMISSIONS ─────────────────────────────────
                if (asl.is_fossil_fuel_use) {
                    asl.GHGBSL_fuel_i_t = asl.ET_FC_I_t_ar_tool_5_BSL;  // From AR Tool 05
                } else {
                    asl.GHGBSL_fuel_i_t = 0;
                }
            }
    
            // ── YEAR-LEVEL AGGREGATIONS ────────────────────────────────────
            // Sum biomass changes across all strata for this year
            const sum_delta_C_BSL_biomass = yearRec.annual_stratum_parameters
                .reduce((acc, s) => acc + (Number(s.annual_stratum_level_parameters
                    .delta_C_BSL_biomass_i_t) || 0), 0);
    
            // Convert carbon changes to CO2 equivalent
            yearRec.GHG_BSL_biomass = -(sum_delta_C_BSL_biomass * const_44_by_12);
    
            // Sum soil emissions across all strata
            const sum_GHG_BSL_soil = yearRec.annual_stratum_parameters.reduce(
                (acc, s) => acc + (Number(s.annual_stratum_level_parameters.GHGBSL_soil_i_t) || 0), 0
            );
            yearRec.GHG_BSL_soil = sum_GHG_BSL_soil;
    
            // Sum fuel emissions across all strata
            const sum_GHG_BSL_fuel = yearRec.annual_stratum_parameters.reduce(
                (acc, s) => acc + (Number(s.annual_stratum_level_parameters.GHGBSL_fuel_i_t) || 0), 0
            );
            yearRec.GHG_BSL_fuel = sum_GHG_BSL_fuel;
        }
    
        // ── CUMULATIVE CALCULATIONS ────────────────────────────────────────
        // Calculate cumulative totals across all years
        baseline.yearly_data_for_baseline_GHG_emissions.reduce((acc, rec) => {
            rec.GHG_BSL_biomass = acc + rec.GHG_BSL_biomass;
            return rec.GHG_BSL_biomass;
        }, 0);
    
        baseline.yearly_data_for_baseline_GHG_emissions.reduce((acc, rec) => {
            rec.GHG_BSL_soil = acc + rec.GHG_BSL_soil;
            return rec.GHG_BSL_soil;
        }, 0);
    
        baseline.yearly_data_for_baseline_GHG_emissions.reduce((acc, rec) => {
            rec.GHG_BSL_fuel = acc + rec.GHG_BSL_fuel;
            return rec.GHG_BSL_fuel;
        }, 0);
    
        // Calculate total baseline emissions per year
        baseline.yearly_data_for_baseline_GHG_emissions.reduce((acc, rec) => {
            rec.GHG_BSL = rec.GHG_BSL_biomass + rec.GHG_BSL_soil + rec.GHG_BSL_fuel;
            return rec.GHG_BSL;
        }, 0);
    }
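    // The cumulative pass above uses reduce() purely for its threaded
    // accumulator: each year's value is overwritten in place with the running
    // total. The same pattern on a bare array, as a minimal sketch:
    //
    //   const recs = [{ v: 10 }, { v: -4 }, { v: 6 }];
    //   recs.reduce((acc, rec) => (rec.v = acc + rec.v), 0);
    //   // recs is now [{ v: 10 }, { v: 6 }, { v: 12 }]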
    // A simplified, self-contained variant of calculatePDTSDT, shown for
    // comparison: it derives the depletion times from per-stratum peat depth
    // data and returns them instead of mutating the temporalBoundary array
    function calculatePDTSDT(baseline, isProjectQuantifyBSLReduction, temporalBoundary, crediting_period) {
        let PDT = null;  // Peat Depletion Time
        let SDT = null;  // Soil organic carbon Depletion Time
    
        // Processing each stratum's peat depth data from test artifact
        baseline.stratum_data.forEach((stratum, stratum_index) => {
            if (stratum.peat_depth_data && stratum.peat_depth_data.length > 0) {
                stratum.peat_depth_data.forEach((peat_data, peat_index) => {
                    // VM0033 Equation 5.1 - Peat Depletion Time calculation
                    if (peat_data.peat_thickness_cm && peat_data.subsidence_rate_cm_yr) {
                        const calculated_PDT = peat_data.peat_thickness_cm / peat_data.subsidence_rate_cm_yr;
    
                        // Take minimum PDT across all strata (most conservative approach)
                        PDT = Math.min(PDT || calculated_PDT, calculated_PDT);
                    }
    
                    // VM0033 Equation 5.2 - Soil organic carbon Depletion Time
                    if (peat_data.soc_stock_t_ha && peat_data.soc_loss_rate_t_ha_yr) {
                        const calculated_SDT = peat_data.soc_stock_t_ha / peat_data.soc_loss_rate_t_ha_yr;
                        SDT = Math.min(SDT || calculated_SDT, calculated_SDT);
                    }
                });
            }
        });
    
        // Apply crediting period constraint from methodology
        const temporal_boundary_years = Math.min(PDT || crediting_period, SDT || crediting_period, crediting_period);
    
        return {
            PDT: PDT,
            SDT: SDT,
            temporal_boundary_years: temporal_boundary_years
        };
    }
    // From er-calculations.js:850-920 - Fire emissions processing
    function processFireEmissions(baseline, temporal_boundary) {
        const fireEmissionsArray = {};
    
        baseline.stratum_data.forEach((stratum, stratum_index) => {
            if (stratum.fire_data && stratum.fire_data.length > 0) {
                stratum.fire_data.forEach((fire_data, fire_index) => {
                    const year = parseInt(fire_data.year);
    
                    // Above-ground biomass fire emissions (VM0033 Equation 8.1.3)
                    if (fire_data.fire_area_ha && fire_data.AGB_tC_ha &&
                        fire_data.combustion_factor && fire_data.CF_root) {
    
                        // AGB fire emissions calculation
                        const fire_emissions_AGB = fire_data.fire_area_ha *
                                                 fire_data.AGB_tC_ha *
                                                 fire_data.combustion_factor *
                                                 (44/12); // CO2 conversion factor
    
                        // Below-ground biomass fire emissions (VM0033 Equation 8.1.4)
                        const fire_emissions_BGB = fire_data.fire_area_ha *
                                                 fire_data.BGB_tC_ha *
                                                 fire_data.CF_root *
                                                 (44/12);
    
                        // Dead wood fire emissions (VM0033 Equation 8.1.5)
                        const fire_emissions_DW = fire_data.fire_area_ha *
                                                 fire_data.dead_wood_tC_ha *
                                                 fire_data.CF_dead_wood *
                                                 (44/12);
    
                        // Litter fire emissions (VM0033 Equation 8.1.6)
                        const fire_emissions_litter = fire_data.fire_area_ha *
                                                     fire_data.litter_tC_ha *
                                                     fire_data.CF_litter *
                                                     (44/12);
    
                        // Total fire emissions for this event
                        const total_fire_emissions = fire_emissions_AGB +
                                                    fire_emissions_BGB +
                                                    fire_emissions_DW +
                                                    fire_emissions_litter;
    
                        // Apply temporal boundary constraints
                        if (year <= temporal_boundary.temporal_boundary_years) {
                            fireEmissionsArray[year] = (fireEmissionsArray[year] || 0) + total_fire_emissions;
                        }
    
                        // Debug output for validation against test artifact
                        debug(`Fire emissions Year ${year}:`, {
                            stratum: stratum_index,
                            fire_event: fire_index,
                            AGB_emissions: fire_emissions_AGB,
                            BGB_emissions: fire_emissions_BGB,
                            total_emissions: total_fire_emissions
                        });
                    }
                });
            }
        });
    
        return fireEmissionsArray;
    }
    // From er-calculations.js:1090-1144 - Advanced stock approach selection
    // (a combined variant that dispatches on baseline.soil_carbon_quantification_approach;
    // note that it reuses the totalStockApproach name from the earlier excerpt)
    function totalStockApproach(baseline, crediting_period, monitoring_submergence_data) {
        const stockData = {};
        const approachType = baseline.soil_carbon_quantification_approach;
    
        if (approachType === "total_stock_approach") {
            // Total Stock Approach: VM0033 Equation 5.2
            baseline.stratum_data.forEach((stratum, stratum_index) => {
                if (stratum.soil_carbon_data && stratum.soil_carbon_data.length > 0) {
                    stratum.soil_carbon_data.forEach((soc_data, soc_index) => {
                        const year = parseInt(soc_data.year);
    
                        // Calculate SOC_MAX using VM0033 Equation 5.2 parameters
                        if (soc_data.area_ha && soc_data.soc_stock_t_ha) {
                            // SOC_MAX = Area × SOC stock × CO2 conversion factor
                            const soc_max = soc_data.area_ha *
                                           soc_data.soc_stock_t_ha *
                                           (44/12); // tCO2 conversion
    
                            // Apply depth-weighted calculation if multiple soil layers
                            let depth_weighted_soc = soc_max;
                            if (soc_data.soil_layers && soc_data.soil_layers.length > 0) {
                                depth_weighted_soc = soc_data.soil_layers.reduce((total, layer) => {
                                    return total + (layer.thickness_cm * layer.soc_density_tC_m3 *
                                                  soc_data.area_ha * 0.01 * (44/12));
                                }, 0);
                            }
    
                            stockData[year] = (stockData[year] || 0) + depth_weighted_soc;
    
                            // Validate against test artifact expected values
                            debug(`SOC calculation Year ${year}:`, {
                                stratum: stratum_index,
                                area_ha: soc_data.area_ha,
                                soc_stock_t_ha: soc_data.soc_stock_t_ha,
                                calculated_soc_max: depth_weighted_soc
                            });
                        }
                    });
                }
            });
        } else if (approachType === "stock_loss_approach") {
            // Stock Loss Approach: VM0033 Equation 5.3
            baseline.stratum_data.forEach((stratum, stratum_index) => {
                if (stratum.soil_carbon_data && stratum.soil_carbon_data.length > 0) {
                    stratum.soil_carbon_data.forEach((soc_data, soc_index) => {
                        const year = parseInt(soc_data.year);
    
                        // Calculate annual SOC loss using VM0033 Equation 5.3
                        if (soc_data.area_ha && soc_data.annual_soc_loss_rate_t_ha_yr) {
                            const annual_soc_loss = soc_data.area_ha *
                                                  soc_data.annual_soc_loss_rate_t_ha_yr *
                                                  (44/12); // tCO2 conversion
    
                            // Apply submergence factor if wetland is partially submerged
                            let submergence_factor = 1.0;
                            if (monitoring_submergence_data && monitoring_submergence_data[year]) {
                                submergence_factor = monitoring_submergence_data[year].submergence_fraction;
                            }
    
                            const adjusted_soc_loss = annual_soc_loss * submergence_factor;
                            stockData[year] = (stockData[year] || 0) + adjusted_soc_loss;
                        }
                    });
                }
            });
        }
    
        return stockData;
    }
    function processProjectEmissions(project, project_soil_CH4, project_soil_CH4_approach,
                                   GWP_CH4, project_soil_N2O, soil_N2O_approach, GWP_N2O,
                                   EF_N2O_Burn, EF_CH4_Burn, isPrescribedBurningOfBiomass) {
    
        const projectEmissionsArray = {};
    
        // Process restoration emissions across multiple phases
        project.stratum_data.forEach((stratum, stratum_index) => {
            if (stratum.restoration_activities && stratum.restoration_activities.length > 0) {
                stratum.restoration_activities.forEach((activity, activity_index) => {
                    const year = parseInt(activity.year);
                    const activity_type = activity.activity_type;
    
                    // Phase 1: Site preparation emissions
                    if (activity_type === "site_preparation") {
                        // Machinery emissions from site clearing
                        if (activity.machinery_fuel_consumption_l && activity.emission_factor_kg_CO2_l) {
                            const machinery_emissions = activity.machinery_fuel_consumption_l *
                                                      activity.emission_factor_kg_CO2_l / 1000; // Convert to tCO2
    
                            projectEmissionsArray[year] = (projectEmissionsArray[year] || 0) +
                                                        machinery_emissions;
                        }
    
                        // Transportation emissions for equipment and materials
                        if (activity.transport_distance_km && activity.transport_emission_factor) {
                            const transport_emissions = activity.transport_distance_km *
                                                      activity.transport_emission_factor / 1000;
    
                            projectEmissionsArray[year] = (projectEmissionsArray[year] || 0) +
                                                        transport_emissions;
                        }
                    }
    
                    // Phase 2: Planting/seeding emissions
                    else if (activity_type === "planting") {
                        // Nursery operations emissions
                        if (activity.nursery_operations) {
                            const nursery_emissions = activity.nursery_operations.seedling_count *
                                                    activity.nursery_operations.emission_per_seedling_kg_CO2 / 1000;
    
                            projectEmissionsArray[year] = (projectEmissionsArray[year] || 0) +
                                                        nursery_emissions;
                        }
    
                        // Planting machinery emissions
                        if (activity.planting_machinery_fuel_l && activity.machinery_emission_factor) {
                            const planting_emissions = activity.planting_machinery_fuel_l *
                                                     activity.machinery_emission_factor / 1000;
    
                            projectEmissionsArray[year] = (projectEmissionsArray[year] || 0) +
                                                        planting_emissions;
                        }
                    }
    
                    // Phase 3: Maintenance emissions
                    else if (activity_type === "maintenance") {
                        // Annual maintenance activities
                        if (activity.maintenance_visits_per_year && activity.emission_per_visit_kg_CO2) {
                            const maintenance_emissions = activity.maintenance_visits_per_year *
                                                        activity.emission_per_visit_kg_CO2 / 1000;
    
                            projectEmissionsArray[year] = (projectEmissionsArray[year] || 0) +
                                                        maintenance_emissions;
                        }
                    }
    
                    // Debug validation against test artifact
                    debug(`Project emissions Year ${year}:`, {
                        stratum: stratum_index,
                        activity_type: activity_type,
                        emissions: projectEmissionsArray[year] || 0
                    });
                });
            }
        });
    
        // Enhanced soil CH4 and N2O emissions in the project scenario.
        // This block runs inside processProjectEmissions, before the return,
        // because it extends the function-local projectEmissionsArray.
    if (project_soil_CH4 && project_soil_CH4_approach) {
        project.stratum_data.forEach((stratum, stratum_index) => {
            if (stratum.soil_ghg_data && stratum.soil_ghg_data.length > 0) {
                stratum.soil_ghg_data.forEach((ghg_data, ghg_index) => {
                    const year = parseInt(ghg_data.year);
    
                    // CH4 emissions calculation with water level dependency
                    if (ghg_data.area_ha && ghg_data.ch4_emission_factor_kg_ha_yr) {
                        let ch4_emission_factor = ghg_data.ch4_emission_factor_kg_ha_yr;
    
                        // Apply water level correction factor (VM0033 specific)
                        if (ghg_data.water_level_cm_above_soil) {
                            const water_level_factor = Math.max(0.1,
                                Math.min(2.0, ghg_data.water_level_cm_above_soil / 10.0));
                            ch4_emission_factor *= water_level_factor;
                        }
    
                        // Apply temperature correction (if available)
                        if (ghg_data.soil_temperature_celsius) {
                            const temp_factor = Math.exp(0.1 * (ghg_data.soil_temperature_celsius - 15));
                            ch4_emission_factor *= temp_factor;
                        }
    
                        const project_ch4_emissions = ghg_data.area_ha *
                                                     ch4_emission_factor *
                                                     GWP_CH4 / 1000; // Convert to tCO2eq
    
                        projectEmissionsArray[year] = (projectEmissionsArray[year] || 0) +
                                                    project_ch4_emissions;
    
                        debug(`Project CH4 emissions Year ${year}:`, {
                            stratum: stratum_index,
                            base_emission_factor: ghg_data.ch4_emission_factor_kg_ha_yr,
                            adjusted_emission_factor: ch4_emission_factor,
                            total_ch4_emissions: project_ch4_emissions
                        });
                    }
    
                    // N2O emissions calculation (typically lower in restored wetlands)
                    if (project_soil_N2O && ghg_data.n2o_emission_factor_kg_ha_yr) {
                        let n2o_emission_factor = ghg_data.n2o_emission_factor_kg_ha_yr;
    
                        // Apply anaerobic reduction factor for N2O in wetlands
                        if (ghg_data.anaerobic_fraction) {
                            n2o_emission_factor *= (1 - ghg_data.anaerobic_fraction * 0.8);
                        }
    
                        const project_n2o_emissions = ghg_data.area_ha *
                                                     n2o_emission_factor *
                                                     GWP_N2O / 1000; // Convert to tCO2eq
    
                        projectEmissionsArray[year] = (projectEmissionsArray[year] || 0) +
                                                    project_n2o_emissions;
                    }
                });
            }
        });
    }

        return projectEmissionsArray;
    }
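As a usage sketch, the function can be exercised with a minimal project object shaped like the records it reads; all values below are hypothetical sample data, and debug is the logging helper already used throughout this listing:

    // Illustrative input: one stratum with a single site-preparation activity
    const project = {
        stratum_data: [{
            restoration_activities: [{
                year: "1",
                activity_type: "site_preparation",
                machinery_fuel_consumption_l: 500,   // litres of fuel
                emission_factor_kg_CO2_l: 2.7        // kg CO2 per litre
            }]
        }]
    };

    // 500 l x 2.7 kg/l / 1000 = 1.35 tCO2 recorded against year 1
    // (the GWP arguments are unused when the soil CH4/N2O flags are false)
    const emissions = processProjectEmissions(project, false, null,
        28, false, null, 265, 0, 0, false);
    // emissions[1] === 1.35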
    function processNETERR(baseline, project, netErrData, SOC_MAX,
                          emission_reduction_from_stock_loss, fire_reduction_premium,
                          FireReductionPremiumArray, NERRWE_Cap, NERRWE_Max, NERError,
                          allowable_uncert, buffer_percentage) {
    
        const netErrArray = {};
        const crediting_period = parseInt(netErrData.crediting_period_years);
    
        // Calculate emission reductions for each year with advanced methodology compliance.
        // NOTE: baselineEmissionsArray and projectEmissionsArray are assumed to be
        // available in the enclosing scope (e.g. produced by the emissions processing
        // functions above); they are not parameters of this function.
        for (let year = 1; year <= crediting_period; year++) {
            const baseline_emissions = baselineEmissionsArray[year] || 0;
            const project_emissions = projectEmissionsArray[year] || 0;
    
            // Core emission reduction calculation (VM0033 Equation 8.5.1)
            let emission_reduction = baseline_emissions - project_emissions;
    
            // Add SOC_MAX benefits if using total stock approach
            if (SOC_MAX && SOC_MAX[year]) {
                emission_reduction += SOC_MAX[year];
                debug(`SOC_MAX benefit Year ${year}:`, SOC_MAX[year]);
            }
    
            // Add emission reductions from stock loss approach
            if (emission_reduction_from_stock_loss && emission_reduction_from_stock_loss[year]) {
                emission_reduction += emission_reduction_from_stock_loss[year];
            }
    
            // Apply fire reduction premium if applicable (VM0033 Section 8.4)
            if (fire_reduction_premium && FireReductionPremiumArray[year]) {
                emission_reduction += FireReductionPremiumArray[year];
                debug(`Fire reduction premium Year ${year}:`, FireReductionPremiumArray[year]);
            }
    
            // Apply leakage deductions (VM0033 Section 8.3)
            if (netErrData.leakage_emissions && netErrData.leakage_emissions[year]) {
                emission_reduction -= netErrData.leakage_emissions[year];
            }
    
            // Store gross emission reduction
            netErrArray[year] = {
                gross_emission_reduction: emission_reduction,
                baseline_emissions: baseline_emissions,
                project_emissions: project_emissions
            };
    
            debug(`NER calculation Year ${year}:`, {
                baseline: baseline_emissions,
                project: project_emissions,
                gross_reduction: emission_reduction
            });
        }
    
        return netErrArray;
    }
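As a quick numeric check of the loop above: the gross reduction for a year is baseline minus project emissions, plus any SOC_MAX and stock-loss benefits, plus the fire premium, minus leakage. With illustrative values:

    // Illustrative only: one crediting year
    const baseline_emissions = 5000; // tCO2e
    const project_emissions = 800;   // tCO2e
    const soc_max_benefit = 300;     // SOC_MAX[year], total stock approach
    const leakage = 150;             // netErrData.leakage_emissions[year]

    // 5000 - 800 + 300 - 150 = 4350 tCO2e gross emission reduction
    const gross = baseline_emissions - project_emissions + soc_max_benefit - leakage;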
    // Apply comprehensive uncertainty assessment (VM0033 Section 8.6)
    function applyUncertaintyAndBufferDeductions(netErrArray, NERError, allowable_uncert, buffer_percentage) {
        Object.keys(netErrArray).forEach(year => {
            const yearData = netErrArray[year];
            let net_emission_reduction = yearData.gross_emission_reduction;
    
            // Step 1: Apply measurement uncertainty deduction
            const measurement_uncertainty_deduction = net_emission_reduction * NERError / 100;
            net_emission_reduction -= measurement_uncertainty_deduction;
    
            // Step 2: Apply model uncertainty if specified
            // (declared with let so it remains defined for reporting below)
            let model_uncertainty_deduction = 0;
            if (allowable_uncert > 0) {
                model_uncertainty_deduction = net_emission_reduction * allowable_uncert / 100;
                net_emission_reduction -= model_uncertainty_deduction;
            }
    
            // Step 3: Apply non-permanence buffer deduction
            const buffer_deduction = net_emission_reduction * buffer_percentage / 100;
            const final_creditable_emission_reduction = net_emission_reduction - buffer_deduction;
    
            // Step 4: Apply NERRWE cap if specified (VM0033 Section 8.5.2)
            let capped_emission_reduction = final_creditable_emission_reduction;
            if (NERRWE_Cap && final_creditable_emission_reduction > NERRWE_Cap) {
                capped_emission_reduction = NERRWE_Cap;
            }
    
            // Step 5: Apply NERRWE maximum if specified
            if (NERRWE_Max && capped_emission_reduction > NERRWE_Max) {
                capped_emission_reduction = NERRWE_Max;
            }
    
            // Update year data with all deductions
            yearData.measurement_uncertainty_deduction = measurement_uncertainty_deduction;
            yearData.model_uncertainty_deduction = model_uncertainty_deduction;
            yearData.buffer_deduction = buffer_deduction;
            yearData.final_creditable_emission_reduction = capped_emission_reduction;
    
            debug(`Uncertainty analysis Year ${year}:`, {
                gross_reduction: yearData.gross_emission_reduction,
                measurement_uncertainty: measurement_uncertainty_deduction,
                model_uncertainty: yearData.model_uncertainty_deduction,
                buffer_deduction: buffer_deduction,
                final_creditable: capped_emission_reduction
            });
        });
    
        return netErrArray;
    }
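To make the order of deductions concrete, here is an illustrative trace of Steps 1-3 for a single year (all percentages hypothetical); the NERRWE cap and maximum would then clip the result if configured:

    // Illustrative only: tracing applyUncertaintyAndBufferDeductions for one year
    let ner = 1000;                        // gross_emission_reduction (tCO2e)
    ner -= ner * 10 / 100;                 // Step 1, NERError = 10      -> 900
    ner -= ner * 5 / 100;                  // Step 2, allowable_uncert = 5 -> 855
    const buffer = ner * 15 / 100;         // Step 3, buffer_percentage = 15 -> 128.25
    const final_creditable = ner - buffer; // 726.75 tCO2e before any cap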
    // Full VM0033 per-stratum implementation of project (WPS) emissions. Note that
    // it shares its name with the phase-based sketch above; in a single scope the
    // later definition overrides the earlier one, so load only one of them.
    function processProjectEmissions(project, project_soil_CH4, project_soil_CH4_approach,
                                     GWP_CH4, project_soil_N2O, soil_N2O_approach, GWP_N2O,
                                     EF_N2O_Burn, EF_CH4_Burn, isPrescribedBurningOfBiomass) {
    
        // loop through every monitoring year -------------------------------------
        for (const yearRec of project.yearly_data_for_project_GHG_emissions ?? []) {
            const { year_t } = yearRec;
    
            // ---- per-stratum loop -------------------------------------------------
            for (const stratum of yearRec.annual_stratum_parameters ?? []) {
                const { stratum_i } = stratum;
    
                const sc = stratum.stratum_characteristics ?? {};
                const asl = stratum.annual_stratum_level_parameters ?? {};
    
                asl.delta_C_TREE_PROJ_i_t_ar_tool_14 = stratum.ar_tool_14.delta_C_TREE;
                asl.delta_C_SHRUB_PROJ_i_t_ar_tool_14 = stratum.ar_tool_14.delta_C_SHRUB;
                asl.ET_FC_I_t_ar_tool_5_WPS = stratum.ar_tool_05.ET_FC_y;
    
                if (asl.is_aboveground_tree_biomass !== true) {
                    asl.delta_C_TREE_PROJ_i_t_ar_tool_14 = 0;
                }
    
                if (asl.is_aboveground_non_tree_biomass !== true) {
                    asl.delta_C_SHRUB_PROJ_i_t_ar_tool_14 = 0;
                }
    
                asl.delta_C_WPS_tree_or_shrub_i_t = const_12_by_44 * (asl.delta_C_TREE_PROJ_i_t_ar_tool_14 + asl.delta_C_SHRUB_PROJ_i_t_ar_tool_14);
    
                if (asl.is_aboveground_non_tree_biomass !== true) {
                    asl.delta_C_WPS_herb_i_t = 0;
                }
    
                asl.delta_C_WPS_biomass_i_t = asl.delta_C_WPS_tree_or_shrub_i_t + (asl.delta_C_WPS_herb_i_t ?? 0);
    
                // Net GHG emissions from soil in the project scenario
    
                if (asl.is_soil) {
                    const method = sc.co2_emissions_from_soil;
    
                    switch (method) {
                        case "Field-collected data":
                            asl.GHGWPS_soil_CO2_i_t = -(const_44_by_12 * asl.delta_C_WPS_soil_i_t);
                            break;
    
                        case "Proxies":
                            asl.GHGWPS_soil_CO2_i_t = asl.GHG_emission_proxy_GHGWPS_soil_CO2_i_t;
                            break;
    
                        default:
                            asl.GHGWPS_soil_CO2_i_t =
                                (asl.GHGWPS_insitu_CO2_i_t ?? 0) +
                                (asl.GHGWPS_eroded_CO2_i_t ?? 0) +
                                (asl.GHGWPS_excav_CO2_i_t ?? 0);
                    }
                } else {
                    asl.GHGWPS_soil_CO2_i_t = 0;
                }
    
                asl.Deduction_alloch_WPS = computeDeductionAllochProject({
                    project_soil_SOC: asl.is_soil,
                    soil_insitu_approach: sc.co2_emissions_from_soil,
                    soil_type: sc.soil_type_t0,
                    AK5: asl.GHGWPS_soil_CO2_i_t,
                    AL5: asl.is_soil ? asl.percentage_C_alloch_WPS : 0,
                    AR5: (asl.is_soil && sc.co2_emissions_from_soil === "Others") ? asl.GHGWPS_insitu_CO2_i_t : 0
                });
    
                // CH4 emissions from soil
    
                if (project_soil_CH4) {
                    const method = project_soil_CH4_approach;
    
                    switch (method) {
                        case "IPCC emission factors":
                            asl.GHGWPS_soil_CH4_i_t = asl.IPCC_emission_factor_ch4_WPS * GWP_CH4;
                            break;
    
                        case "Proxies":
                            asl.GHGWPS_soil_CH4_i_t = asl.GHG_emission_proxy_ch4_WPS * GWP_CH4;
                            break;
    
                        default:
                            asl.GHGWPS_soil_CH4_i_t = asl.CH4_WPS_soil_I_t * GWP_CH4;
                    }
                } else {
                    asl.GHGWPS_soil_CH4_i_t = 0;
                }
    
                // N2O emissions from soil
                if (project_soil_N2O) {
                    const method = soil_N2O_approach;
    
                    switch (method) {
                        case "IPCC emission factors":
                            asl.GHGWPS_soil_N2O_i_t = asl.IPCC_emission_factor_n2o_WPS * GWP_N2O;
                            break;
    
                        case "Proxies":
                            asl.GHGWPS_soil_N2O_i_t = asl.N2O_emission_proxy_WPS * GWP_N2O;
                            break;
    
                        default:
                            asl.GHGWPS_soil_N2O_i_t = asl.N2O_WPS_soil_I_t * GWP_N2O;
                    }
                } else {
                    asl.GHGWPS_soil_N2O_i_t = 0;
                }
    
                // GHGWPS-soil,i,t
                asl.GHGWPS_soil_i_t = asl.A_i_t * (asl.GHGWPS_soil_CO2_i_t - asl.Deduction_alloch_WPS + asl.GHGWPS_soil_CH4_i_t + asl.GHGWPS_soil_N2O_i_t);
    
                // Net non-CO2 emissions from prescribed burning of herbaceous biomass and shrub in project scenario
    
                if (asl.is_burning_of_biomass) {
                    asl.CO2_e_N2O_i_t = asl.biomassi_t * EF_N2O_Burn * GWP_N2O * Math.pow(10, -6) * asl.A_i_t;
                    asl.CO2_e_CH4_i_t = asl.biomassi_t * EF_CH4_Burn * GWP_CH4 * Math.pow(10, -6) * asl.A_i_t;
                    asl.GHGWPS_burn_i_t = asl.CO2_e_N2O_i_t + asl.CO2_e_CH4_i_t;
                } else {
                    asl.GHGWPS_burn_i_t = 0;
                }
    
                // GHGWPS-fuel,i,t
                if (asl.is_fossil_fuel_use) {
                    asl.GHGWPS_fuel_i_t = asl.ET_FC_I_t_ar_tool_5_WPS;
                } else {
                    asl.GHGWPS_fuel_i_t = 0;
                }
    
            }
    
    
            // ---- per-year calculations ------------------------------------------------------
            const sum_delta_C_WPS_biomass =
                yearRec.annual_stratum_parameters.reduce(
                    (acc, s) =>
                        acc +
                        (Number(
                            s.annual_stratum_level_parameters.delta_C_WPS_biomass_i_t
                        ) || 0),
                    0
                );
    
            yearRec.GHG_WPS_biomass = -(sum_delta_C_WPS_biomass * const_44_by_12);
    
            const sum_GHG_WPS_soil =
                yearRec.annual_stratum_parameters.reduce(
                    (acc, s) =>
                        acc +
                        (Number(
                            s.annual_stratum_level_parameters.GHGWPS_soil_i_t
                        ) || 0),
                    0
                );
    
            yearRec.GHG_WPS_soil = sum_GHG_WPS_soil;
    
            const sum_GHG_WPS_fuel =
                yearRec.annual_stratum_parameters.reduce(
                    (acc, s) =>
                        acc +
                        (Number(
                            s.annual_stratum_level_parameters.GHGWPS_fuel_i_t
                        ) || 0),
                    0
                );
    
            yearRec.GHG_WPS_fuel = sum_GHG_WPS_fuel;
    
            if (isPrescribedBurningOfBiomass) {
                const sum_GHG_WPS_burn =
                    yearRec.annual_stratum_parameters.reduce(
                        (acc, s) =>
                            acc +
                            (Number(
                                s.annual_stratum_level_parameters.GHGWPS_burn_i_t
                            ) || 0),
                        0
                    );
    
                yearRec.GHG_WPS_burn = sum_GHG_WPS_burn;
            } else {
                yearRec.GHG_WPS_burn = 0;
            }
    
            yearRec.GHG_WPS = yearRec.GHG_WPS_biomass + yearRec.GHG_WPS_soil + yearRec.GHG_WPS_fuel + yearRec.GHG_WPS_burn;
        }
    
        // Convert the per-year pool totals into running cumulative totals, then
        // recompute GHG_WPS from the cumulative components with the sign flipped
        // (net emissions expressed as removals).
        let cumBiomass = 0, cumSoil = 0, cumFuel = 0, cumBurn = 0;
        for (const rec of project.yearly_data_for_project_GHG_emissions) {
            cumBiomass = rec.GHG_WPS_biomass = cumBiomass + rec.GHG_WPS_biomass;
            cumSoil = rec.GHG_WPS_soil = cumSoil + rec.GHG_WPS_soil;
            cumFuel = rec.GHG_WPS_fuel = cumFuel + rec.GHG_WPS_fuel;
            cumBurn = rec.GHG_WPS_burn = cumBurn + rec.GHG_WPS_burn;
            rec.GHG_WPS = -(rec.GHG_WPS_biomass + rec.GHG_WPS_soil + rec.GHG_WPS_fuel + rec.GHG_WPS_burn);
        }
    }
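The per-stratum calculations above assume the carbon/CO2 conversion constants are defined in the enclosing scope; a minimal sketch:

    // Assumed helper constants: molecular-weight conversions between C and CO2
    const const_44_by_12 = 44 / 12; // tC -> tCO2e
    const const_12_by_44 = 12 / 44; // tCO2e -> tC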
    // Aggregates baseline and project soil fluxes into per-year net emission
    // reduction records. It shares its name with the simpler processNETERR sketch
    // above; in a single scope the later definition wins.
    function processNETERR(baseline, project, netErrData, SOC_MAX,
                           emission_reduction_from_stock_loss, fire_reduction_premium,
                           FireReductionPremiumArray, NERRWE_Cap, NERRWE_Max, NERError,
                           allowable_uncert, buffer_percentage) {
        /* ───────── meta kept from original array (if present) ──────── */
        const META = {
            type: netErrData.net_ERR_calculation_per_year?.[0]?.type,
            '@context': netErrData.net_ERR_calculation_per_year?.[0]?.['@context'] ?? [],
        };
    
        /* ───────── aggregate baseline ───────── */
        const perYear = new Map();                      // key = year_t
    
        for (const yr of baseline.yearly_data_for_baseline_GHG_emissions ?? []) {
            const total = (yr.annual_stratum_parameters ?? []).reduce(
                (a, s) =>
                    a +
                    +(s.annual_stratum_level_parameters?.GHGBSL_soil_CO2_i_t ?? 0) *
                    +(s.annual_stratum_level_parameters?.A_i_t ?? 0),
                0,
            );
    
            const total_GHG_BSL_SOIL_DEDUCTED_CO2_i_t = (yr.annual_stratum_parameters ?? []).reduce(
                (a, s) => {
                    const ghgbsl_soil_co2 = +(s.annual_stratum_level_parameters?.GHGBSL_soil_CO2_i_t ?? 0);
                    const deduction_alloch = +(s.annual_stratum_level_parameters?.Deduction_alloch ?? 0);
                    const a_i_t = +(s.annual_stratum_level_parameters?.A_i_t ?? 0);
                    return a + (ghgbsl_soil_co2 - deduction_alloch) * a_i_t;
                },
                0,
            );
    
            perYear.set(yr.year_t, {
                year_t: yr.year_t,
                sumation_GHG_BSL_soil_CO2_i_A_i: total,
                sumation_GHG_WPS_soil_CO2_i_A_i: 0,        // filled in the project loop
                GHG_BSL_SOIL_DEDUCTED_CO2_i_t: total_GHG_BSL_SOIL_DEDUCTED_CO2_i_t,
                GHG_WPS_SOIL_DEDUCTED_CO2_i_t: 0,          // filled in the project loop
            });
        }
    
        /* ───────── aggregate project ───────── */
        for (const yr of project.yearly_data_for_project_GHG_emissions ?? []) {
            const total = (yr.annual_stratum_parameters ?? []).reduce(
                (a, s) =>
                    a +
                    +(s.annual_stratum_level_parameters?.GHGWPS_soil_CO2_i_t ?? 0) *
                    +(s.annual_stratum_level_parameters?.A_i_t ?? 0),
                0,
            );
    
            const total_GHG_WPS_SOIL_DEDUCTED_CO2_i_t = (yr.annual_stratum_parameters ?? []).reduce(
                (a, s) => {
                    const ghgwps_soil_co2 = +(s.annual_stratum_level_parameters?.GHGWPS_soil_CO2_i_t ?? 0);
                    const deduction_alloch_wps = +(s.annual_stratum_level_parameters?.Deduction_alloch_WPS ?? 0);
                    const a_i_t = +(s.annual_stratum_level_parameters?.A_i_t ?? 0);
                    return a + (ghgwps_soil_co2 - deduction_alloch_wps) * a_i_t;
                },
                0,
            );
    
            if (!perYear.has(yr.year_t)) {
                perYear.set(yr.year_t, {
                    year_t: yr.year_t,
                    sumation_GHG_BSL_soil_CO2_i_A_i: 0,
                    sumation_GHG_WPS_soil_CO2_i_A_i: 0,
                    GHG_BSL_SOIL_DEDUCTED_CO2_i_t: 0,   // year absent from baseline data
                    GHG_WPS_SOIL_DEDUCTED_CO2_i_t: 0,
                });
            }
            perYear.get(yr.year_t).sumation_GHG_WPS_soil_CO2_i_A_i = total;
            perYear.get(yr.year_t).GHG_WPS_SOIL_DEDUCTED_CO2_i_t = total_GHG_WPS_SOIL_DEDUCTED_CO2_i_t;
        }
    
        /* ───────── cumulative sums + final array ───────── */
        let cumBSL = 0;
        let cumWPS = 0;
        let cumBSL_DEDUCTED = 0;
        let cumWPS_DEDUCTED = 0;
    
        netErrData.net_ERR_calculation_per_year = [...perYear.values()]
            .sort((a, b) => a.year_t - b.year_t)
            .map(rec => {
                cumBSL += rec.sumation_GHG_BSL_soil_CO2_i_A_i;
                cumWPS += rec.sumation_GHG_WPS_soil_CO2_i_A_i;
                cumBSL_DEDUCTED += rec.GHG_BSL_SOIL_DEDUCTED_CO2_i_t;
                cumWPS_DEDUCTED += rec.GHG_WPS_SOIL_DEDUCTED_CO2_i_t;
                return {
                    year_t: rec.year_t,
                    sumation_GHG_BSL_soil_CO2_i_A_i: cumBSL,
                    sumation_GHG_WPS_soil_CO2_i_A_i: cumWPS,
                    GHG_BSL_SOIL_DEDUCTED_CO2_i_t: cumBSL_DEDUCTED,
                    GHG_WPS_SOIL_DEDUCTED_CO2_i_t: cumWPS_DEDUCTED,
                    ...META,                       // ONLY type + @context copied in
                };
            });
    
        /* ───────── per-year deductions and adjustments ───────── */

        // Soil deduction under the stock loss approach: any cumulative soil benefit
        // beyond SOC_MAX cannot be credited
        if (emission_reduction_from_stock_loss) {
            netErrData.net_ERR_calculation_per_year.forEach(rec => {
                const temp_deduction = rec.sumation_GHG_BSL_soil_CO2_i_A_i -
                    rec.sumation_GHG_WPS_soil_CO2_i_A_i - SOC_MAX;
                rec.GHG_WPS_soil_deduction = temp_deduction > 0 ? temp_deduction : 0;
            });
        } else {
            netErrData.net_ERR_calculation_per_year.forEach(rec => {
                rec.GHG_WPS_soil_deduction = 0;
            });
        }

        // Fire Reduction Premium, when claimed
        if (fire_reduction_premium) {
            netErrData.net_ERR_calculation_per_year.forEach(rec => {
                rec.FRP = getFireReductionPremiumPerYear(FireReductionPremiumArray, rec.year_t);
            });
        } else {
            netErrData.net_ERR_calculation_per_year.forEach(rec => {
                rec.FRP = 0;
            });
        }

        // Leakage term is fixed at zero in this implementation
        netErrData.net_ERR_calculation_per_year.forEach(rec => {
            rec.GHG_LK = 0;
        });

        // NERRWE = GHG_BSL + GHG_WPS + FRP - GHG_LK - soil deduction
        netErrData.net_ERR_calculation_per_year.forEach(rec => {
            rec.NERRWE = getGHGBSL(baseline.yearly_data_for_baseline_GHG_emissions, rec.year_t) +
                getGHGWPS(project.yearly_data_for_project_GHG_emissions, rec.year_t) +
                rec.FRP - rec.GHG_LK - rec.GHG_WPS_soil_deduction;
        });

        // NERRWE_Cap flags whether capping applies; NERRWE_Max holds the cap value
        netErrData.net_ERR_calculation_per_year.forEach(rec => {
            if (NERRWE_Cap) {
                rec.NERRWE_capped = rec.NERRWE <= NERRWE_Max ? rec.NERRWE : NERRWE_Max;
            } else {
                rec.NERRWE_capped = rec.NERRWE;
            }
            rec.NER_t = rec.NERRWE_capped;
        });

        // Deduct only the uncertainty above the allowable threshold:
        // NER_t * (1 - (NERError - allowable_uncert))
        netErrData.net_ERR_calculation_per_year.forEach(rec => {
            rec.adjusted_NER_t = rec.NER_t * (1 - NERError + allowable_uncert);
        });

        // Stock-based NER used below for the buffer calculations
        netErrData.net_ERR_calculation_per_year.forEach(rec => {
            rec.NER_stock_t = (rec.GHG_BSL_SOIL_DEDUCTED_CO2_i_t +
                getGHGBSLBiomass(baseline.yearly_data_for_baseline_GHG_emissions, rec.year_t)) -
                (rec.GHG_WPS_SOIL_DEDUCTED_CO2_i_t +
                    getGHGWPSBiomass(project.yearly_data_for_project_GHG_emissions, rec.year_t));
        });
    
        // First, sort by year_t (ascending)
        const netErrArr = netErrData.net_ERR_calculation_per_year.sort((a, b) => a.year_t - b.year_t);
    
        netErrArr.forEach((rec, idx, arr) => {
            if (idx === 0) {
                rec.buffer_deduction = rec.NER_stock_t * buffer_percentage;
            } else {
                const prevRec = arr[idx - 1];
                rec.buffer_deduction = calculateNetERRChange(
                    rec.adjusted_NER_t,
                    prevRec.adjusted_NER_t,
                    rec.NER_stock_t,
                    prevRec.NER_stock_t,
                    buffer_percentage
                );
            }
        });
    
    
        netErrArr.forEach((rec, idx, arr) => {
            if (idx === 0) {
                rec.VCU = rec.adjusted_NER_t - rec.buffer_deduction;
            } else {
                const prevRec = arr[idx - 1];
                rec.VCU = calculateNetVCU(
                    rec.adjusted_NER_t,
                    prevRec.adjusted_NER_t,
                    rec.buffer_deduction
                );
            }
        });
    
    
        netErrData.total_VCU_per_instance = calculateTotalVCUPerInstance(netErrData);
    
    }
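A hedged end-to-end sketch of how these two functions might be wired together inside a custom logic block; baseline, project, netErrData, SOC_MAX and FireReductionPremiumArray stand for the populated data objects from the listings above, and every numeric value is illustrative rather than methodology-prescribed:

    // Illustrative wiring only
    const GWP_CH4 = 28, GWP_N2O = 265;           // AR5 100-year GWPs, for illustration
    const EF_CH4_Burn = 2.7, EF_N2O_Burn = 0.07; // hypothetical burn emission factors

    processProjectEmissions(project, true, "IPCC emission factors", GWP_CH4,
        true, "IPCC emission factors", GWP_N2O,
        EF_N2O_Burn, EF_CH4_Burn, /* isPrescribedBurningOfBiomass */ true);

    processNETERR(baseline, project, netErrData, SOC_MAX,
        /* emission_reduction_from_stock_loss */ true,
        /* fire_reduction_premium */ false, FireReductionPremiumArray,
        /* NERRWE_Cap */ false, /* NERRWE_Max */ 0,
        /* NERError */ 0.1, /* allowable_uncert */ 0.05,
        /* buffer_percentage */ 0.15);

    netErrData.net_ERR_calculation_per_year.forEach(rec =>
        debug(`Year ${rec.year_t}: NER_t = ${rec.NER_t}, VCU = ${rec.VCU}`));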