October 29, 2025
8 min read

Integrating Pendo Analytics with OneTrust Cookie Consent in a .NET Razor Pages Portal

Real enterprise integration work: retrofitting Pendo analytics and OneTrust cookie consent into a production portal with 40+ domains, Angular microfrontends, and testing constraints. The technical patterns that worked and the challenges the documentation didn't mention.

Development · .NET · Analytics · OneTrust

When the business wants analytics and legal says "not without proper consent management," you end up retrofitting cookie consent into an existing portal. That's exactly what happened when we needed to add Pendo analytics to our customer portal at Global Payments while staying compliant with privacy regulations.

This wasn't a greenfield project where you can architect everything perfectly from the start. This was real enterprise work: 40+ domains across different regions and environments, Angular microfrontends running in iframes, and a testing workflow that basically required merging code before you could verify it worked.

Here's what that integration actually looked like, including the parts that made it harder than the documentation suggested.

The Basic Setup

OneTrust provides a clean pattern for blocking scripts until the user consents. Instead of conditionally loading scripts with JavaScript callbacks, you let OneTrust control script execution directly by setting the script type to text/plain and adding an optanon-category class.

In your _Layout.cshtml, the OneTrust SDK loads first:

HTML
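<!-- Representative sketch of the standard OneTrust CDN stub; the data-domain-script
     GUID is a placeholder for the script ID from your OneTrust admin panel. -->
<script src="https://cdn.cookielaw.org/scripttemplates/otSDKStub.js"
        type="text/javascript"
        charset="UTF-8"
        data-domain-script="00000000-0000-0000-0000-000000000000"></script>
<script type="text/javascript">
    // OneTrust calls this after the SDK loads; consent-change handling can hook in here.
    function OptanonWrapper() { }
</script>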

Then the Pendo script uses the blocking pattern:

HTML
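<!-- Sketch: the standard Pendo agent loader wrapped in OneTrust's blocking attributes.
     The API key is a placeholder, and pendo.initialize() with visitor/account metadata
     happens separately (see the middleware note below). -->
<script type="text/plain" class="optanon-category-C0002">
    (function (apiKey) {
        (function (p, e, n, d, o) {
            var v, w, x, y, z; o = p[d] = p[d] || {}; o._q = o._q || [];
            v = ['initialize', 'identify', 'updateOptions', 'pageLoad', 'track'];
            for (w = 0, x = v.length; w < x; ++w) (function (m) {
                o[m] = o[m] || function () {
                    o._q[m === v[0] ? 'unshift' : 'push']([m].concat([].slice.call(arguments, 0)));
                };
            })(v[w]);
            y = e.createElement(n); y.async = true;
            y.src = 'https://cdn.pendo.io/agent/static/' + apiKey + '/pendo.js';
            z = e.getElementsByTagName(n)[0]; z.parentNode.insertBefore(y, z);
        })(window, document, 'script', 'pendo');
    })('YOUR-PENDO-API-KEY');
</script>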

The key is type="text/plain" and class="optanon-category-C0002". The browser won't execute a script with type text/plain, so Pendo stays blocked by default. When the user consents to analytics cookies (category C0002 in OneTrust's taxonomy), OneTrust automatically changes the type attribute to text/javascript, and the browser executes the script.

If the user later revokes consent through OneTrust's preference modal, OneTrust switches it back to text/plain. We require a page refresh for consent changes to take effect, which keeps the implementation straightforward. You can verify Pendo's state in Chrome DevTools by running pendo.validateInstall().

We handled Pendo's visitor and account metadata through middleware rather than inline script initialization, which kept the layout file cleaner and made the metadata logic testable.
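A minimal sketch of that shape (the class name, claim types, and HttpContext.Items keys are illustrative, not our production code):

C#

// Middleware that resolves the Pendo visitor/account metadata once per request
// and stashes it in HttpContext.Items, where the layout reads it when rendering
// the pendo.initialize() call. Registered with app.UseMiddleware<PendoMetadataMiddleware>().
using System.Threading.Tasks;
using Microsoft.AspNetCore.Http;

public class PendoMetadataMiddleware
{
    private readonly RequestDelegate _next;

    public PendoMetadataMiddleware(RequestDelegate next) => _next = next;

    public async Task InvokeAsync(HttpContext context)
    {
        // Illustrative claim names; the real metadata came from our identity and account services.
        context.Items["PendoVisitorId"] = context.User.FindFirst("sub")?.Value ?? "anonymous";
        context.Items["PendoAccountId"] = context.User.FindFirst("org_id")?.Value ?? "unknown";

        await _next(context);
    }
}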

The Multi-Domain Reality

Here's where it got interesting. Our portal spans 40+ domains across different regional portals and environments. OneTrust needs to know which domains share consent settings, which means configuring domain groups.

This was part OneTrust admin panel work and part API work. I found myself repeatedly republishing scripts through their admin interface while making API calls to create and update domain groups. The API documentation at https://developer.onetrust.com/onetrust/reference/createdomaingroup has the details, but the workflow became: configure in the UI, publish, make API calls, test, repeat.

We ended up with two main domain groups: prod and non-prod. This kept things manageable, but it also meant any domain you wanted to test on had to be in the right group first; without that, it's basically impossible to verify the integration works.

The Angular Microfrontend Problem

The portal includes Angular microfrontends running in iframes, which added another layer of complexity. These apps needed to respect the same consent choices, but they're isolated in their own execution contexts.

The solution was straightforward in concept: the parent Razor page reads the OneTrust cookie and passes a hasConsent: boolean property down to each iframe. The Angular apps then conditionally load Pendo based on that boolean value.
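The parent side of that hand-off looked roughly like this (a sketch rather than our exact implementation; it assumes OneTrust's OptanonConsent cookie lists granted categories as C0002:1 in its groups value, which is worth verifying against your own configuration):

C#

// Sketch: PageModel for the shell page that hosts the Angular iframes.
using System;
using Microsoft.AspNetCore.Mvc.RazorPages;

public class PortalShellModel : PageModel
{
    // Exposed to the view, which appends it to each iframe URL, e.g.
    // <iframe src="/apps/payments?hasConsent=@Model.HasAnalyticsConsent"></iframe>
    public bool HasAnalyticsConsent { get; private set; }

    public void OnGet()
    {
        var raw = Request.Cookies["OptanonConsent"] ?? string.Empty;
        var decoded = Uri.UnescapeDataString(raw);

        // C0002 is OneTrust's analytics/performance category; ":1" means consent was granted.
        HasAnalyticsConsent = decoded.Contains("C0002:1");
    }
}

Each Angular app reads that flag and only injects the Pendo loader when it's true.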

The implementation was less straightforward. I hadn't touched Angular in two years, and local testing of iframe communication is difficult enough without adding consent management into the mix. I brought in help from someone more current with Angular, and we adopted the same "make changes and pray" workflow I was already using for the Razor side.

The Testing Problem

Local testing was more trouble than it was worth. OneTrust's domain groups don't play nicely with localhost, and setting up a realistic test environment locally would have taken longer than just testing in a real environment.

So the workflow became: make changes, open a PR, get it merged, deploy to a feature branch environment, then actually verify it worked. The feedback loop was about five minutes from merge to verification, which isn't terrible, but it meant being thoughtful about what you committed. You couldn't just iterate rapidly in a local environment like you normally would.

This felt uncomfortable at first. We're trained to test locally, to have tight feedback loops, to know something works before we push it. But sometimes enterprise integration work doesn't allow for that, and you have to adjust. The key is being deliberate about your changes and understanding the system well enough to have confidence before you commit.

What I'd Do Differently

If I were doing this again, I'd push harder for a dedicated integration environment with proper domain configuration earlier in the process. The "merge and hope" workflow got us to done, but it also meant more round trips and more time waiting to verify changes.

I'd also document the domain group requirements more explicitly upfront. When you're configuring 40+ domains across multiple environments, having a clear map of what goes where saves a lot of back and forth with the OneTrust admin panel.

The Takeaway

Integrating third-party tools in a real enterprise environment is messier than the documentation suggests. You're dealing with multi-domain configurations, isolated execution contexts, API quirks, and testing constraints that don't exist in greenfield projects or simple examples.

The technical pattern for cookie-based script blocking is straightforward. OneTrust's optanon-category approach handles the conditional loading cleanly without custom JavaScript. The work is in navigating everything around that pattern: the domain groups, the iframe communication, the testing limitations, the coordination between different parts of the system.

If you're doing something similar, expect it to take longer than you think. Plan for the testing constraints. Document your domain group configuration. And remember that sometimes the path to shipping working software isn't as clean as we'd like, and that's okay. At the end of the day, the goal is to get compliant analytics running in production, not to have a perfect local development experience.

Thanks for reading!

Questions or feedback? Feel free to reach out.