Testing the libraries

Before starting, if you don’t know Braden, you should check out his newsletter. He shares insightful articles on computational development in the AEC space. As with the previous article, Braden continues to collaborate on this series, contributing his sharp insights and experience throughout.

Throughout this series, we’ve built a professional-grade library infrastructure, including structured projects, multi-platform builds, comprehensive testing, version control workflows, and automated deployment. We’ve created tools that work reliably across Revit versions, Rhino iterations, and deployment channels. But there’s a question we haven’t answered: How do you know if anyone actually uses what you’ve built?

More importantly, when you’re developing computational design libraries within an enterprise environment, how do you demonstrate the value of your work to leadership? And how do you ensure that proprietary tools remain within your organisation’s security boundaries?

These aren’t just technical questions; they’re strategic ones that determine whether computational design teams get continued investment, whether libraries evolve based on actual usage patterns rather than assumptions, and whether enterprise IT approves deployment of your tools in production environments.

This article addresses the final pieces of professional library development: measuring impact through telemetry and securing access through authentication.

The measurement challenge: outputs vs outcomes

In traditional software development, measuring usage is straightforward: tracking page views, API calls, and user sessions. However, computational design libraries present a fundamental problem: the unit of functionality isn’t a page, but rather a node. How do you measure the value of a single Dynamo node or Grasshopper component?

Most teams default to measuring outputs rather than outcomes. Outputs are what the tool delivers: “Our library was used 2,500 times this quarter” or “45 users loaded the library.” Outcomes are what the tool achieves: “automation saved 850 hours of manual work” or “projects completed under budget increased by 15%.” The difference matters enormously when you’re trying to justify continued investment.

Consider a geometry transformation node used in 500 scripts across your organisation, embedded in automation workflows, saving hundreds of engineering hours monthly. From pure execution counts it might fire thousands of times or just once per script run; which number represents its value? Or consider a complex structural analysis component used by exactly three engineers on one infrastructure project, running a few dozen times in total but enabling workflows that weren’t possible before, potentially saving weeks of manual calculations on a multi-million-dollar project. Traditional metrics such as execution counts, daily active users, and session duration measure outputs. They don’t capture the problems solved, the workflows enabled, or the automation provided.

Yet without measurement, you’re flying blind. The critical first step is to engage in stakeholder conversations with project managers, leadership, and finance teams to understand which outcomes actually matter to the business. What metrics prove value in your organisation? Cost reduction? Time savings? Project success rates? Quality improvements? These conversations define what to measure and align telemetry with business strategy rather than just capturing technical metrics. The challenge then becomes tracking events that reveal those specific outcomes, understanding which workflows drive business value, identifying where errors block productivity, and determining which software versions deserve prioritisation. This requires selective instrumentation that captures the signal without being overwhelmed by noise.

The technical reality: Microsoft Application Insights

While many telemetry platforms exist (custom logging solutions, open-source platforms, cloud analytics services), Microsoft Application Insights has become the pragmatic choice for enterprise environments already using Azure infrastructure. The selection isn’t about Application Insights being inherently superior; it’s about fitting into existing organisational ecosystems. When your organisation already uses Azure Active Directory for authentication and Azure for infrastructure, Application Insights integrates seamlessly with existing identity and security policies. Telemetry data remains within your organisation’s Azure subscription, addressing compliance concerns about sensitive project data leaving the organisation. IT teams already familiar with Azure monitoring can manage telemetry infrastructure without needing to learn entirely new platforms. And the Kusto Query Language enables sophisticated analysis of usage patterns, error trends, and performance characteristics.
The implementation pattern is straightforward: initialise a telemetry client when the library loads, track events and exceptions as they occur, and let Application Insights aggregate the data for later analysis. When your Dynamo or Grasshopper library loads in the host application, you initialise a telemetry client with a connection string pointing to your Application Insights resource and custom properties that get attached to every telemetry event: host application name, version, and library version. For Dynamo running in Revit 2024, you’d capture “Dynamo Revit”, “2024”, and the Dynamo version. For Grasshopper in Rhino 8, you’d capture “Rhino 8” and the specific Rhino build.

var telemetryClient = TelemetryClientFactory.CreateClient(
    connectionString: "YourApplicationInsightsConnectionString",
    applicationName: "YourLibraryName",
    customPropertyFactory: GetTelemetryProperties);

private IDictionary<string, string> GetTelemetryProperties()
{
    return new Dictionary<string, string>
    {
        { "HostApplication", "Dynamo Revit" },
        { "HostApplicationVersion", "2024" },
        { "LibraryVersion", "2.5.0" }
    };
}

Throughout your library’s lifecycle, you track meaningful events: the library starting and stopping, a user authenticating, authentication failing, critical operations completing, and errors occurring. Application Insights receives these events, stores them, and provides analysis capabilities. You can query for patterns like “How many users loaded the library this week?”, “Which host application versions have the highest error rates?”, or “What percentage of users successfully authenticate on the first try?”
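As a rough sketch, assuming the factory above returns (or wraps) the standard Application Insights TelemetryClient, the lifecycle points just described might be instrumented like this (the event names and RunCriticalOperation are illustrative placeholders):

telemetryClient.TrackEvent("Library.Started");
telemetryClient.TrackEvent("User.Authenticated",
    new Dictionary<string, string> { { "Method", "Silent" } });

try
{
    RunCriticalOperation();   // stand-in for one of your high-value operations
    telemetryClient.TrackEvent("CriticalOperation.Completed");
}
catch (Exception ex)
{
    telemetryClient.TrackException(ex);
    throw;
}

telemetryClient.TrackEvent("Library.Stopped");
telemetryClient.Flush();      // push buffered telemetry before the host application closes

In the Application Insights portal these arrive as customEvents and exceptions records, each carrying the host application, version, and library version properties attached at initialisation.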

The silent tracker pattern: Telemetry without assumptions

One implementation detail matters significantly: not every environment allows telemetry. Development machines, air-gapped networks, highly restricted IT environments—these scenarios exist, and your library must handle them gracefully.

The solution is a **silent tracker** pattern. By default, your library uses a tracker that implements the telemetry interface but doesn’t actually send data anywhere. It’s a no-op tracker that satisfies the telemetry API contract without requiring network connectivity or external services.

public static class Usage
{
    private static IUsageTracker _client = new SilentUsageTracker();

    public static IUsageTracker Client
    {
        get => _client;
        set => _client = value ?? new SilentUsageTracker();
    }
}

When your library initialises in an environment that supports telemetry, you swap in the real tracker. If initialisation fails (due to network issues, a misconfigured connection string, or blocked endpoints), you fall back to the silent tracker. The library continues to work normally; telemetry becomes optional, not a dependency.
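A minimal sketch of the pieces this pattern assumes; the interface shape, the SilentUsageTracker body, and the ApplicationInsightsUsageTracker name are illustrative rather than a specific package API:

public interface IUsageTracker
{
    void TrackEvent(string name, IDictionary<string, string> properties = null);
    void TrackException(Exception exception);
}

// Default no-op tracker: satisfies the contract without any network activity
public class SilentUsageTracker : IUsageTracker
{
    public void TrackEvent(string name, IDictionary<string, string> properties = null) { }
    public void TrackException(Exception exception) { }
}

public static void InitialiseTelemetry(string connectionString)
{
    try
    {
        // Swap in the real tracker only when the environment allows it
        Usage.Client = new ApplicationInsightsUsageTracker(connectionString);
    }
    catch (Exception)
    {
        // Blocked endpoints, missing configuration, air-gapped machine:
        // keep the silent tracker and carry on without telemetry
        Usage.Client = new SilentUsageTracker();
    }
}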

This pattern has a secondary benefit: during development, you can use the silent tracker to avoid polluting production telemetry with developer testing activities. Alternatively, you can point to a separate Application Insights instance for development to validate telemetry implementation without affecting production data.

Selective measurement: Signal over noise

The pattern that emerged from production deployments is priority-based telemetry. Not all nodes matter equally. A typical Dynamo script contains approximately 150 nodes, but tracking each execution generates millions of unnecessary events. The solution: basic utility nodes (list operations, mathematical functions, simple transformations) are counted in aggregate metrics without individual events. High-priority nodes tied to business outcomes, complex computational operations, specialised domain knowledge implementations, and workflow-enabling features send detailed telemetry. The classification is strategic, not technical: a geometrically complex node might be of low priority if it serves only a basic utility. In contrast, a conceptually simple node may be of high priority if it represents a critical business workflow.

[NodeCategory("Analysis.Structural")]
[TelemetryPriority(TelemetryPriority.High)]
public class FiniteElementAnalysis : NodeModel
{// Sends detailed telemetry when executed}

[NodeCategory("List.Basic")]
[TelemetryPriority(TelemetryPriority.Low)]
public class FilterList : NodeModel
{// Counted in aggregate, no individual events}

In practice, a library with 200 nodes might designate 30-40 as high priority. A script with 150 nodes then generates 5-10 events (library start, authentication, high-priority features, library stop) instead of 150. The signal reveals which workflows drive business value, where errors block productivity, and which features justify continued investment.
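A hedged sketch of how a dispatcher behind those attributes might behave; the Priority property on the attribute and the aggregate flush are assumptions, not the library’s actual implementation:

using System.Collections.Concurrent;
using System.Reflection;

public static class NodeTelemetry
{
    private static readonly ConcurrentDictionary<string, int> AggregateCounts =
        new ConcurrentDictionary<string, int>();

    public static void RecordExecution(NodeModel node)
    {
        // Read the priority attribute; anything unmarked is treated as low priority
        var priority = node.GetType()
            .GetCustomAttribute<TelemetryPriorityAttribute>()?.Priority ?? TelemetryPriority.Low;

        if (priority == TelemetryPriority.High)
        {
            Usage.Client.TrackEvent("HighPriority.NodeExecuted",
                new Dictionary<string, string> { { "NodeName", node.Name } });
        }
        else
        {
            // Low-priority nodes are tallied locally instead of emitting individual events
            AggregateCounts.AddOrUpdate(node.GetType().Name, 1, (_, count) => count + 1);
        }
    }

    // Called once per session (e.g. at library shutdown): one aggregate event per node type
    public static void FlushAggregates()
    {
        foreach (var entry in AggregateCounts)
        {
            Usage.Client.TrackEvent("Aggregate.NodeExecutions", new Dictionary<string, string>
            {
                { "NodeName", entry.Key },
                { "Count", entry.Value.ToString() }
            });
        }
        AggregateCounts.Clear();
    }
}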

Translating telemetry into business value

The framework combines **adoption metrics** (growth trends across teams), **performance metrics** (hours saved and costs reduced), **delivery efficiency** (issue resolution speed), and **user satisfaction** (Net Promoter Scores and retention). The narrative this enables: “Adoption grew 30% quarter-over-quarter across 60% of engineering. Projects using the library completed 15% faster. User satisfaction remained strong with a Net Promoter Score of 45.”

But the most powerful capability emerges when you **associate library usage with specific projects**. By working with IT to connect telemetry with CRM or project management systems, you can track which projects use the library and correlate usage with project outcomes, budgets, timelines, and delivery success. This project-level attribution transforms generic statements like “the library saved time” into specific evidence: “On Project X, the library automated 200 hours of modelling work, contributing to 12% under-budget delivery.” When stakeholders see library impact tied directly to their projects and business metrics, justifying continued investment becomes straightforward. This is why the stakeholder conversations matter; they define which business outcomes to track and how to connect library usage to those outcomes through IT system integration.

Authentication: The enterprise requirement

Telemetry tells you what’s happening; authentication controls who can access your tools and make changes. In enterprise AEC environments with proprietary methodologies and client confidentiality requirements, controlling access isn’t optional; it’s how you ensure proprietary tools remain within organisational boundaries, satisfy IT security requirements, and enable enterprise deployment with confidence.

Just as Application Insights is pragmatic for telemetry in Azure-based enterprises, the Microsoft Authentication Library (MSAL) is the standard for authentication in organisations using Azure Active Directory (now Entra ID). MSAL authenticates users against your organisation’s identity provider, obtains access tokens, and verifies user identity. For computational design libraries, this typically means users authenticate once—often automatically using Windows credentials via Windows Authentication Broker—and the library validates their identity against the corporate directory.

The flow works seamlessly when properly configured. When the library loads in Revit or Rhino, MSAL first checks for cached credentials. If a valid token exists from a previous session, authentication is complete before the user notices. If no cached token exists and the Windows Authentication Broker is available with corporate credentials, authentication still happens silently: the user’s Windows identity is verified against Azure AD and an access token is issued without any prompts. Only when silent authentication fails (the user is not signed in with a corporate account, the token has expired, or additional consent is required) does MSAL display an authentication dialogue. Once authenticated, tokens are securely cached on the user’s machine, eliminating repeated prompts. When tokens expire, MSAL refreshes them automatically or re-prompts as needed.

public static async Task<AuthenticationResult> AcquireTokenAsync(IntPtr windowHandle)
{
    var application = await Application.Value;
    var account = (await application.GetAccountsAsync()).FirstOrDefault()
        ?? PublicClientApplication.OperatingSystemAccount;

    try
    {
        // Silent first: cached token or Windows broker, no prompt
        return await application.AcquireTokenSilent(scopes, account).ExecuteAsync();
    }
    catch (MsalUiRequiredException)
    {
        // Fall back to an interactive prompt parented to the host window
        return await application.AcquireTokenInteractive(scopes)
            .WithAccount(account)
            .WithParentActivityOrWindow(windowHandle)
            .ExecuteAsync();
    }
}
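The method above assumes a shared scopes array and a lazily created MSAL client stored in Application; one way those might be set up (the client ID, tenant ID, and scope are placeholders for your own Azure AD app registration):

private static readonly string[] scopes = { "api://your-api-application-id/.default" };

private static readonly Lazy<Task<IPublicClientApplication>> Application =
    new Lazy<Task<IPublicClientApplication>>(() =>
    {
        var app = PublicClientApplicationBuilder
            .Create("your-azure-ad-client-id")
            .WithAuthority(AzureCloudInstance.AzurePublic, "your-tenant-id")
            .WithDefaultRedirectUri()
            // .WithBroker(...) can be added here to enable the Windows broker
            .Build();

        return Task.FromResult(app);
    });

Persistent token caching, so that authentication survives application restarts, is typically added with the Microsoft.Identity.Client.Extensions.Msal cache helper registered against app.UserTokenCache.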

Authentication introduces friction at every step between “user opens Revit” and “user starts working”, risking frustration and resistance. The balance comes from designing authentication that satisfies IT security requirements while minimising user impact. Silent authentication comes first, using the Windows Authentication Broker so users never see prompts. Token caching ensures authentication happens at most once per session. Error messages are clear and actionable: not “Authentication failed” but “You need to be connected to the corporate network to use this library. Please connect to VPN and restart Revit.” Where possible, the library degrades gracefully, allowing basic functionality without authentication while requiring it for advanced features. Finally, coordination with IT ensures the library’s Azure AD application registration has appropriate permissions, users are in the correct security groups, and network requirements are documented.
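As one hedged illustration of the error-message principle, a small helper might translate common failure modes into actionable guidance (the messages and error codes shown are examples, not an exhaustive mapping):

private static string ToUserMessage(Exception ex)
{
    if (ex is HttpRequestException)
        return "You need to be connected to the corporate network to use this library. "
             + "Please connect to VPN and restart Revit.";

    if (ex is MsalUiRequiredException)
        return "Please sign in with your corporate account when prompted.";

    if (ex is MsalServiceException serviceException && serviceException.ErrorCode == "invalid_client")
        return "This library is not registered correctly in Azure AD. Please contact IT support.";

    return "Authentication failed. Please contact support and include the error details.";
}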

From IT’s perspective, authentication isn’t about keeping users out; it’s about knowing who’s in and providing audit trails. Users are authenticated against Azure AD, not local credentials. When someone leaves the company, disabling their Azure AD account immediately revokes access to the library. Authentication events are logged via telemetry or Azure AD logs, satisfying compliance requirements. MSAL handles token storage securely, preventing common mistakes such as storing credentials in plain text. By using Azure AD, the library integrates with existing identity infrastructure, eliminating the need for separate credential management and additional user databases.

When combined with telemetry, authentication enables powerful deployment workflows. Orkestra synchronises packages to authenticated workstations, ensuring only authorised users receive proprietary tools. Telemetry then tracks the rollout: querying Application Insights shows adoption of new versions, error rates, and performance improvements. Version-specific insights enable comparison between releases, validating that updates improve the user experience. When permissible under privacy policies, authenticated user data enables targeted communication: “Users still on v2.3.0, please update to v2.5.0 for critical bug fixes.” Authentication and telemetry aren’t separate concerns; they’re complementary systems that enable informed, secure library management.

Privacy and ethical considerations

Privacy isn’t an afterthought; it’s a prerequisite for enterprise deployment approval. The principle is straightforward: collect only what serves a clear purpose and never capture sensitive data. Telemetry tracks usage patterns (which features execute, when, on which platforms) but deliberately excludes user-entered values, generated geometry, analysis results, or any intellectual property. The line is clear: measuring usage, not extracting work product.

Implementation requires close coordination with IT security and legal teams from the start. They define retention policies, approve data categories, establish access controls, and ensure compliance with organisational privacy policies and regulatory requirements. This collaboration isn’t an obstacle; it’s what enables deployment. One practical approach embeds transparency directly in the library, providing users with clear information about what’s collected and what’s explicitly excluded. When you can demonstrate thoughtful data governance with clear boundaries and defined purposes, approval becomes straightforward. Privacy requirements don’t prevent measurement; they ensure measurement happens responsibly within organisational standards.
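One way to embed that transparency is a simple structure the library can surface in an About or Privacy dialogue; the wording below is illustrative:

public static class TelemetryNotice
{
    public static readonly string[] Collected =
    {
        "Feature and node names that execute, with timestamps",
        "Host application and version (e.g. Revit 2024, Rhino 8)",
        "Library version and error types"
    };

    public static readonly string[] NeverCollected =
    {
        "User-entered values or model parameters",
        "Generated geometry or analysis results",
        "Client data or any other intellectual property"
    };
}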

Practical implementation: Where to instrument

Implementation focuses on library initialisation (where you track startup context), authentication events (success/failure), and high-value feature execution. Complexity classification adds valuable context—nodes tagged by computational complexity reveal patterns like “high-complexity nodes account for 15% of usage but 60% of errors, suggesting better error handling is needed.”

[NodeCategory("Analysis.Structural")]
[ComplexityLevel(NodeComplexity.High)]
public class FiniteElementAnalysis : NodeModel
{ 
  public override void Execute(){        
    Usage.TrackEvent("HighComplexity.NodeExecuted", new Dictionary<string, string="">{
      { "NodeName", this.Name },
      { "ComplexityLevel", "High" },
      { "Category", "Analysis.Structural" } 
});        
// ... implementation }}</string,>

Architectural patterns from production deployments

The implementations that evolved in large organisations reveal something interesting: a basic telemetry library sends events to Application Insights, which works for small teams but breaks down at enterprise scale. What emerges instead are sophisticated multi-system architectures that address data integration, governance, cost optimisation, and privacy requirements simultaneously.

**Integrating with enterprise metrics systems** became necessary when IT leadership wanted to compare computational design library adoption against all digital tools in the organisation. The pattern that works has your library sending usage events through a standardised REST API (secured via Key Vault or Managed Identity) to a corporate metrics dashboard, Power BI, Tableau, or custom analytics platforms. This API accepts formatted payloads, including the timestamp, user identifier from the corporate directory, tool identifier, usage context (such as project codes), and machine identifier for license tracking. The payoff is positioning your library alongside established tools: “Our computational design library serves 150 engineers, comparable to specialised FEA software at 120 users, and growing faster quarter-over-quarter.” Suddenly, you’re not a niche experiment, you’re enterprise infrastructure.

// After local telemetry, report to the enterprise metrics API
await enterpriseMetricsClient.ReportUsageAsync(new EnterpriseMetricsEvent
{
    Date = DateTime.UtcNow,
    UserId = authenticatedUser.Id,
    ToolIdentifier = "ComputationalDesign.Library",
    FeatureUsed = "ParametricBridge.Analysis",
    ProjectCode = currentProject.Code,
    MachineId = Environment.MachineName
});
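A hedged sketch of the client and payload behind that call; the EnterpriseMetricsClient name, endpoint route, and event shape are assumptions rather than a specific corporate API:

using System;
using System.Net.Http;
using System.Text;
using System.Text.Json;
using System.Threading.Tasks;

public class EnterpriseMetricsEvent
{
    public DateTime Date { get; set; }
    public string UserId { get; set; }
    public string ToolIdentifier { get; set; }
    public string FeatureUsed { get; set; }
    public string ProjectCode { get; set; }
    public string MachineId { get; set; }
}

public class EnterpriseMetricsClient
{
    private readonly HttpClient _http;

    public EnterpriseMetricsClient(HttpClient http)
    {
        _http = http;   // pre-configured with the base address and credentials (Key Vault / Managed Identity)
    }

    public async Task ReportUsageAsync(EnterpriseMetricsEvent metricsEvent)
    {
        try
        {
            var json = JsonSerializer.Serialize(metricsEvent);
            using (var content = new StringContent(json, Encoding.UTF8, "application/json"))
            {
                // Best-effort post to the corporate metrics endpoint (placeholder route)
                await _http.PostAsync("api/tool-usage", content);
            }
        }
        catch (HttpRequestException)
        {
            // Metrics reporting must never block or fail the user's workflow
        }
    }
}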

**Deferred data enrichment** solves a fundamental tension: you need rich context for analytics (project names, clients, managers, sectors) but can’t slow down the user experience with database queries at runtime. The solution separates collection from enrichment. During execution, the library captures lightweight data, including project code, feature used, and timestamp, and sends it immediately to Application Insights: no database lookups, no API calls to slow systems, and zero user-facing latency. A separate service (an Azure Logic App, a function, or a scheduled job) then runs on a schedule (hourly, daily, or weekly) to pull recent telemetry, enrich each project code by querying corporate CRM/ERP systems for project details, calculate derived complexity metrics and time-savings estimates, and push the enriched data to Azure Tables or directly into Power BI datasets. This pattern emerged from a practical constraint: security teams will not grant libraries direct database access, but analytics teams require enriched data for reporting. By separating concerns, both requirements are met.
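A sketch of the enrichment side under those assumptions; QueryRecentUsageAsync, LookupProjectAsync, WriteEnrichedRowAsync, and EnrichedUsageRow are hypothetical stand-ins for the telemetry query, CRM/ERP lookup, and reporting-store write:

// Runs on a schedule (Logic App, function, or cron job), well away from the user's session
public async Task EnrichRecentTelemetryAsync()
{
    // 1. Pull the lightweight events captured at runtime
    var recentEvents = await QueryRecentUsageAsync(TimeSpan.FromHours(1));

    foreach (var usageEvent in recentEvents)
    {
        // 2. Enrich each project code with CRM/ERP context after the fact
        var project = await LookupProjectAsync(usageEvent.ProjectCode);

        // 3. Push the enriched row to reporting storage (Azure Tables, Power BI dataset, ...)
        await WriteEnrichedRowAsync(new EnrichedUsageRow
        {
            Timestamp = usageEvent.Timestamp,
            ProjectCode = usageEvent.ProjectCode,
            ProjectName = project.Name,
            Client = project.Client,
            Sector = project.Sector,
            FeatureUsed = usageEvent.FeatureUsed
        });
    }
}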

**Separating user, project, and usage data** became essential when privacy teams realised that developers debugging performance issues don’t need to know which specific clients are involved. In contrast, managers reviewing adoption don’t need detailed technical telemetry. The pattern uses separate storage with different access controls. User information (ID, name, location, department) lives in one table accessible only to HR and authorised managers. Project information (code, name, client, location, manager, sector) is stored in another table, accessible to project managers and reporting teams. Usage telemetry (timestamp, user ID reference, project code reference, feature used, complexity code, and performance metrics) resides in a third table, accessible to both development and analytics teams. The library sends usage events with references (user ID, project code) but not complete personal or project data. Reporting queries join tables as needed, but access controls ensure teams only see data they’re authorised to access. A developer sees usage patterns without knowing which users or clients. A manager sees adoption metrics without technical details. This separation-of-concerns architecture satisfies both technical needs and privacy requirements without compromise.
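A small sketch of the usage table in that separation: the row carries only references that join to the separately controlled user and project tables (field names are illustrative):

public class UsageTelemetryRow
{
    public DateTime Timestamp { get; set; }
    public string UserIdReference { get; set; }       // joins to the HR-controlled user table
    public string ProjectCodeReference { get; set; }  // joins to the PM-controlled project table
    public string FeatureUsed { get; set; }
    public string ComplexityCode { get; set; }
    public double DurationMilliseconds { get; set; }
}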

Conclusion

Measuring impact and securing access are the final pieces that transform computational design libraries from development projects into enterprise assets.

Telemetry answers questions that matter to the business: Which workflows drive project success? What features contribute to cost reduction or time savings? Where do errors block productivity and impact delivery? When aligned with stakeholder-defined outcomes and connected to project-level data through CRM integration, telemetry transforms from technical metrics into business evidence demonstrating ROI to leadership, proving impact on actual projects, and continuously improving based on how the library performs against business objectives, not just technical benchmarks.

Authentication ensures proprietary tools remain within authorised boundaries, satisfies IT security requirements, and enables enterprise deployment with confidence. It’s not about creating barriers; it’s about meeting the security standards that allow computational design libraries to be treated as production tools rather than experimental side projects.

What becomes clear throughout this series is that professional-grade library development isn’t a solo developer activity; it’s inherently **multi-functional collaboration**. Computational designers build the functionality, but IT teams provide infrastructure and security guidance. Legal teams define privacy boundaries, while stakeholders determine the business outcomes worth measuring. Project managers connect library usage to project success, and leadership provides the strategic direction that aligns development with organisational goals. Together, these capabilities complete the professional library development lifecycle: structured projects, multi-platform builds, comprehensive testing, version control, automated deployment, usage measurement, and access control. Every piece serves a purpose, and every piece requires cross-functional collaboration.

The AEC industry increasingly recognises computational design as core to project delivery, not a niche specialisation. Professional-grade libraries built with the same discipline as any other enterprise software and developed through true multi-functional teamwork are what enable that transition. Telemetry and authentication are what prove these libraries are ready for prime time, and the collaborative process of implementing them is what transforms computational design teams from isolated specialists into integrated partners in enterprise software delivery.