Deploying the libraries
Before starting, if you don’t know Braden, you should check out his newsletter. He shares insightful articles on computational development in the AEC space. As with the previous article, Braden continues to collaborate on this series, contributing his sharp insights and experience throughout.
After establishing our project structure, build configurations, and testing strategies in the previous articles, we now face the most critical question: how do we actually get these carefully crafted packages into the hands of users? And more importantly, how do we achieve this consistency across multiple software versions without being overwhelmed by the complexity of deployment?
In the AEC world, deployment isn’t just about pushing code to a server. We’re dealing with Dynamo packages that need to land in the correct folders for Revit 2022, 2023, 2024, and 2025, as well as Grasshopper components packaged as Yak files for Rhino 7 and 8. Additionally, we are increasingly supporting direct deployment to package management systems. Each platform has its own quirks, specific folder structures, and metadata requirements.
The traditional approach? Write increasingly complex GitHub Actions workflows. YAML upon YAML, each one trying to orchestrate builds, tests, packaging, and deployment. It works—and with well-structured composite actions and reusable workflows, it can be quite maintainable for straightforward scenarios. But as complexity grows, particularly when dealing with multiple platforms, conditional logic, and complex integrations, the limitations become apparent. Testing locally is difficult (tools like act exist but have limitations), debugging often requires workflow_dispatch runs or careful log analysis, and the learning curve for YAML's conditional syntax and GitHub Actions expressions can be steep.
This is where NUKE offers a different paradigm. Instead of learning GitHub Actions’ domain-specific language, you write build automation in C#, a language you already know, with full IDE support, debugging capabilities, and compile-time type checking. Your build scripts become first-class citizens in your codebase, testable and maintainable just like any other code.
The typical scenario looks like this:
You need to build your library for eight different configurations (Revit 22-25, Civil3D 22-25), package them correctly for Dynamo's folder structure, create Grasshopper Yak packages, distribute them through your chosen channels, and maybe push directly to a package manager. Oh, and you'd like to test this process locally before committing.
In pure GitHub Actions, this becomes a sprawling YAML file with repeated patterns, complex matrix strategies, and conditional logic that can be hard to follow, especially for developers not deeply familiar with GitHub Actions syntax. Testing locally means either using tools like act (which approximates but doesn't perfectly replicate the CI environment) or, more commonly, using workflow_dispatch to trigger test runs in the actual CI environment.
The challenges compound:
- Repetition: Even with reusable workflows, similar logic often appears across multiple workflow steps
- Limited debugging: While GitHub provides debugging logs and you can use workflow_dispatch for testing, stepping through pipeline logic with breakpoints isn’t possible
- Testing difficulty: Running the whole deployment process on your local machine requires additional tooling and setup
- Complex conditionals: GitHub Actions expression syntax and conditional logic become unwieldy for complex scenarios
- Learning curve: Mastering GitHub Actions workflows, composite actions, and expressions takes time
The reality is that YAML excels at configuration, but once your deployment logic reaches a certain complexity, especially when integrating with multiple external APIs or implementing sophisticated conditional logic, having a full programming language available becomes valuable.
Enter NUKE
NUKE is a build automation system that lets you define your entire build, test, and deployment pipeline in C#. Think of it as the conductor of your deployment orchestra, coordinating all the different instruments (MSBuild, package managers, file operations, API calls) through code you can actually understand, debug, and test.
The beauty of NUKE is that it doesn’t replace your CI/CD platform; it enhances it. GitHub Actions still orchestrates when things run (on PR, on merge, on release), but NUKE handles the how. Your workflow file is essentially reduced to: “checkout code, run NUKE with these parameters.” The shift looks like this:
Before (YAML-heavy):
- name: Build Dynamo for Revit 2022
run: dotnet build --configuration ReleaseR22
- name: Build Dynamo for Revit 2023
run: dotnet build --configuration ReleaseR23
- name: Package Dynamo R22
run: |
mkdir packages
copy files...
zip package...
- name: Upload packages
run: complex PowerShell script...
# ...repeat for each version...
After (NUKE-powered):

```yaml
- name: Run NUKE
  run: ./build.cmd --Version ${{ steps.release.outputs.tag_name }} --Configuration "Release" --OperationType "BulkAll"
```
All the complexity is encapsulated in C# code, where you have proper tooling, debugging, and the ability to test locally. The trade-off is that your team now needs to understand NUKE’s API and conventions—but for teams already working in C#, this learning curve is often gentler than mastering GitHub Actions’ expression language and workflow syntax.
The anatomy of a NUKE build
NUKE organises work into Targets: discrete units of work that can depend on each other, execute conditionally, and trigger subsequent targets. Think of them as methods with built-in dependency management: they understand what needs to run before them, can decide whether to execute based on conditions, and can automatically trigger follow-up work.
Here’s the conceptual flow of a NUKE build:
```csharp
partial class Build : NukeBuild
{
    // Parameters can be passed from command line or CI environment
    [Parameter("The build version")]
    string Version;

    [Parameter("Grasshopper | Dynamo | BulkAll | Test")]
    string OperationType;

    // Automatically detects if running locally vs. in CI
    Configuration Configuration = IsLocalBuild ? Configuration.Debug : Configuration.Release;

    // Runs once before any targets execute
    protected override void OnBuildInitialized()
    {
        // Initialize version, load secrets, set up services
        _version = Helper.GetLongVersion(Version);

        if (IsLocalBuild)
        {
            // Load from user secrets for local development
            _credentials = LoadFromUserSecrets();
        }
        else
        {
            // Load from environment variables in CI
            _credentials = LoadFromEnvironmentVariables();
        }
    }
}
```
The OnBuildInitialized method runs before any targets execute, providing a perfect opportunity to set up connections, load credentials (from user secrets locally or environment variables in CI), and prepare shared resources. This dual approach means developers can test the full deployment pipeline on their machines without exposing sensitive credentials in code.
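Here's a minimal sketch of what those two loaders might look like, assuming the secrets in play are an Orkestra token and a GitHub token (the BuildCredentials record and the secret names are illustrative, not part of any real API):

```csharp
using System;
using Microsoft.Extensions.Configuration;

// Illustrative only: the BuildCredentials record and secret names are assumptions
record BuildCredentials(string OrkestraToken, string GitHubToken);

partial class Build
{
    BuildCredentials LoadFromUserSecrets()
    {
        // Reads the user-secrets store on the developer's machine, populated with
        //   dotnet user-secrets set "Orkestra:Token" "<token>"
        // (requires a <UserSecretsId> in the build project file)
        var config = new ConfigurationBuilder()
            .AddUserSecrets<Build>()
            .Build();

        return new BuildCredentials(config["Orkestra:Token"], config["GitHub:Token"]);
    }

    BuildCredentials LoadFromEnvironmentVariables()
    {
        // In CI, the workflow passes these in as environment variables
        // (see the Release workflow later in this article)
        return new BuildCredentials(
            Environment.GetEnvironmentVariable("ORKESTRA_TOKEN"),
            Environment.GetEnvironmentVariable("GITHUB_TOKEN"));
    }
}
```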
Building for multiple platforms
The real power of NUKE becomes apparent when orchestrating builds across multiple platforms. Instead of writing separate workflow steps for each configuration, you write intelligent C# logic that adapts to your needs.
Compilation targets
A compilation target leverages the conditional build system we established in earlier articles:
```csharp
Target CompileDynamo => _ => _
    .DependsOn(Restore)                         // Run after NuGet restore completes
    .OnlyWhenStatic(() => OperationType == "Dynamo" || OperationType == "BulkAll") // Conditional execution
    .Triggers(PackageDynamo, DistributeDynamo)  // Automatically run these next
    .Executes(() =>
    {
        // Get all Release configurations for Dynamo (e.g., ReleaseR22, ReleaseC23)
        var configurations = SolutionConfigurations
            .Where(config => Regex.IsMatch(config, @"^Release[RC].*"))
            .ToList();

        foreach (var configuration in configurations)
        {
            DotNetBuild(settings => settings
                .SetConfiguration(configuration)
                .SetProperty("AssemblyInformationalVersion", _version)
                .SetAssemblyVersion(_version));
        }
    });
```
Breaking down the Target syntax (which can look cryptic at first):
- `Target CompileDynamo => _ => _`: Defines a target named CompileDynamo. The `_ => _` is NUKE's fluent syntax for target definition.
- `.DependsOn(Restore)`: This target won't run until the Restore target completes successfully. NUKE manages the dependency graph automatically.
- `.OnlyWhenStatic(...)`: Conditionally executes based on the operation type parameter. If the condition isn't met, NUKE skips this target.
- `.Triggers(...)`: After this target completes successfully, automatically run the PackageDynamo and DistributeDynamo targets.
- `.Executes(...)`: The actual work to perform—in this case, building multiple configurations.
Because we leverage the build.props and build.targets files from earlier articles, each configuration (ReleaseR22, ReleaseC23, etc.) automatically uses the correct framework, references, and output paths. NUKE just needs to loop through them and invoke the build.
Grasshopper and Yak packages
Grasshopper deployment introduces Yak packaging, Rhino's package manager format. The process downloads the Yak CLI tool, prepares the package directory with all necessary files, generates a manifest dynamically with the correct version, and invokes Yak to create the distributable package.
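A sketch of what such a target could look like, assuming a CompileGrasshopper target and a GrasshopperOutputDir path exist in your build (the download URL is the one McNeel documents for the Yak CLI):

```csharp
Target PackageGrasshopper => _ => _
    .DependsOn(CompileGrasshopper) // assumed compile target, analogous to CompileDynamo
    .Executes(async () =>
    {
        // Download the Yak CLI once (URL from McNeel's Yak documentation)
        var yakExe = TemporaryDirectory / "yak.exe";
        if (!File.Exists(yakExe))
        {
            using var http = new HttpClient();
            File.WriteAllBytes(yakExe, await http.GetByteArrayAsync(
                "https://files.mcneel.com/yak/tools/latest/yak.exe"));
        }

        // Stage the package directory: assemblies, icon, generated manifest
        CopyDirectoryRecursively(GrasshopperOutputDir, Rhino8YakDir,
            DirectoryExistsPolicy.Merge, FileExistsPolicy.Overwrite);
        CreateYakManifest(); // shown below

        // `yak build` packs the directory containing manifest.yml into a .yak file
        ProcessTasks.StartProcess(yakExe, "build", workingDirectory: Rhino8YakDir)
            .AssertZeroExitCode();
    });
```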
The beauty of doing this in NUKE is that you can test it locally, debug issues immediately, and iterate quickly without pushing to CI each time. The Yak manifest generation is particularly elegant—instead of maintaining a separate YAML file that can get out of sync with your version, you generate it dynamically:
```csharp
private void CreateYakManifest()
{
    string manifestPath = Rhino8YakDir / "manifest.yml";

    string yamlContent = $@"
name: YourPackageName
version: {_version}
authors:
  - Your Organization
description: A computational design library for Grasshopper.
url: ""https://your-documentation-site.com/""
icon: PackageIcon.png
";

    File.WriteAllText(manifestPath, yamlContent);
}
```
Now your Yak package version is always synchronised with your release version—no manual updates, no risk of forgetting to update the manifest.

Distribution strategies: Where should packages go?
Once your packages are built, you need to get them to users. This is where your organisation’s infrastructure and distribution strategy come into play. NUKE’s flexibility enables you to implement the distribution approach that best suits your context.
Traditional distribution approaches
GitHub Releases: The simplest approach. After building, create a zip of all packages and attach it to a GitHub release. Users can download the software manually or through custom package managers that point to GitHub releases. This works well for open-source projects or smaller teams, requires minimal infrastructure, and gives users full control over when to update.
Internal file shares or SharePoint: Many organisations already use SharePoint or network file shares for internal tools. NUKE can upload packages to specific folders organised by software version (Revit 2024, Rhino 8, etc.). The advantage is leveraging existing infrastructure without standing up new systems. Users access packages through mapped network drives or SharePoint document libraries.
Cloud storage (Azure Blob, AWS S3): For more sophisticated setups, packages can be pushed to cloud storage. This approach scales well, provides CDN distribution, and integrates with custom package managers. Use Azure Storage for Dynamo packages with a custom Dynamo package server pointing to those blob containers, or S3 buckets with signed URLs for access control.
Rhino Package Manager (Yak): For Grasshopper components, the official Rhino package manager is an option. Publishing to the public Yak server makes your packages discoverable to all Rhino users worldwide. For internal tools, you might host a private Yak server that mirrors the public one but serves your proprietary packages.
You can even put Yak packages on a network drive and point Rhino's package manager at it. You keep a master copy while your users install from the packaged copies. This also works well for distributing Grasshopper scripts.
Custom Dynamo package servers: Similar to Rhino, you can host a custom Dynamo package manager that appears in Dynamo’s package manager UI. Users see your packages alongside public packages, but you maintain full control over access and distribution.
The key insight is that NUKE doesn’t lock you into any specific distribution method. Your deployment targets can upload to multiple destinations simultaneously or conditionally, based on the environment (e.g., dev builds to one location, production to another).
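As a sketch, a distribution target can fan out like this; the file share path, PackagesDirectory, and the _packageUploader helper (shown later) are placeholders for whatever your organisation uses:

```csharp
Target DistributePackages => _ => _
    .DependsOn(PackageDynamo)
    .Executes(async () =>
    {
        // Every build lands on the internal file share, versioned by folder
        CopyDirectoryRecursively(PackagesDirectory, FileSharePath / _version,
            DirectoryExistsPolicy.Merge, FileExistsPolicy.Overwrite);

        // Local builds push to a development workspace, CI builds to production
        var destination = IsLocalBuild ? "dev" : "production";
        await _packageUploader.UploadPackageAsync(
            PackagesDirectory, destinationPath: $"{destination}/{_version}");
    });
```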
Direct deployment to Orkestra
While traditional distribution methods work, they share a common limitation: they're pull-based. Users must actively download and install packages, leading to version fragmentation across teams. Some users have the latest version, others are months behind, and debugging issues becomes a nightmare when you don't know what version someone is running.
This is where Orkestra fundamentally changes the distribution model. Orkestra is a package management and deployment system designed for AEC computational design tools. It enables push-based distribution directly to workspaces and hubs, organisational units that group users and projects. Instead of users pulling packages when they remember to check for updates, packages are automatically synchronised to configured workstations. It’s the difference between telling everyone to go download the latest version and having the latest version appear automatically on their machines.
Why Orkestra matters
The traditional workflow looks like this:
- You release a new version
- You send an email announcement
- Users (eventually) check the package manager
- Users manually update
- Half the team is on version 1.2, the other half on 1.5, someone’s still on 1.0
The Orkestra workflow:
- You release a new version
- NUKE pushes directly to Orkestra hubs
- Orkestra synchronises packages to all configured workstations
- Everyone has the same version automatically
- You can require specific versions for specific projects
This push-based model is especially valuable in AEC environments where project teams span months or years. When someone opens a project from three months ago, you want confidence that their computational tools match the project’s requirements, not just that they manually updated it at some point.
The Orkestra integration architecture
Integrating with Orkestra in NUKE showcases why C# is valuable for complex integrations. You’re interacting with a REST API that requires authentication, handles presigned upload URLs (temporary, secure URLs for uploading files to cloud storage without permanent credentials), manages chunked file uploads, and coordinates package metadata refresh. Implementing this logic in shell scripts would be fragile and challenging to debug.
The high-level flow works like this:
Step 1: Request an upload URL: Your NUKE build contacts Orkestra’s REST API with authentication (using a bearer token—a secure credential passed in the HTTP request header), specifies the destination (hub ID and workspace ID—unique identifiers for where the package should go), and receives a presigned S3-compatible upload URL. This temporary URL is suitable for a single upload, providing security without requiring the management of permanent credentials to cloud storage.
Step 2: Upload the package: Using the presigned URL, you can upload your zipped package directly to Orkestra’s cloud storage. (S3-compatible means it uses Amazon S3’s API format, which is supported by many cloud storage systems.) The upload occurs in chunks with progress tracking, enabling the transfer of large packages and facilitating the graceful handling of network issues.
Step 3: Notify Orkestra: After the upload completes, you call back to Orkestra’s API to refresh package metadata. This tells Orkestra to process the new package, update version information, and mark it as available for distribution to workstations.
The beauty of implementing this in C# is comprehensive error handling, retry logic, detailed logging, and testability. You can set breakpoints, inspect API responses, simulate failures, and verify the entire flow locally before ever pushing to CI.
Implementing the Orkestra uploader
Your NUKE build includes a PackageUploader class that encapsulates all Orkestra interaction. This class handles authentication with bearer tokens (the secure credentials mentioned earlier), manages HTTP client configuration (setting up the web requests), coordinates the three-step upload process, and provides clear logging at each stage.
The key benefit is abstraction. Your deployment targets don't need to know the details of Orkestra's API—they just call UploadPackageAsync with the source folder and destination path. The uploader handles everything else. If Orkestra's API changes, you update one class and test it locally, rather than hunting through workflow files trying to figure out which commands need adjustment.
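Orkestra's actual routes and payload shapes aren't reproduced in this article, so treat the following as a structural sketch: the base address, endpoints, and JSON fields are hypothetical, but the shape follows the three-step flow described above:

```csharp
using System;
using System.IO;
using System.IO.Compression;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Net.Http.Json;
using System.Threading.Tasks;

public class PackageUploader
{
    private readonly HttpClient _api;
    private readonly string _hubId;
    private readonly string _workspaceId;

    public PackageUploader(string bearerToken, string hubId, string workspaceId)
    {
        // Hypothetical base address; authentication via bearer token
        _api = new HttpClient { BaseAddress = new Uri("https://api.orkestra.example/") };
        _api.DefaultRequestHeaders.Authorization =
            new AuthenticationHeaderValue("Bearer", bearerToken);
        (_hubId, _workspaceId) = (hubId, workspaceId);
    }

    public async Task UploadPackageAsync(string sourceFolder, string destinationPath)
    {
        // Zip the package folder to a temporary file
        var zipPath = Path.Combine(Path.GetTempPath(), "package-upload.zip");
        File.Delete(zipPath); // no-op if the file doesn't exist
        ZipFile.CreateFromDirectory(sourceFolder, zipPath);

        // Step 1: request a presigned upload URL for the destination (hypothetical route)
        var ticketResponse = await _api.PostAsJsonAsync("uploads",
            new { hubId = _hubId, workspaceId = _workspaceId, path = destinationPath });
        ticketResponse.EnsureSuccessStatusCode();
        var ticket = await ticketResponse.Content.ReadFromJsonAsync<UploadTicket>();

        // Step 2: PUT the zip to the presigned S3-compatible URL. Use a bare client:
        // presigned URLs carry their own auth, and a stray bearer header can make
        // S3-compatible stores reject the request. (A production implementation
        // would upload in chunks with progress tracking.)
        using var storage = new HttpClient();
        await using var stream = File.OpenRead(zipPath);
        var put = await storage.PutAsync(ticket.PresignedUrl, new StreamContent(stream));
        put.EnsureSuccessStatusCode();

        // Step 3: tell Orkestra to refresh metadata and publish the package
        var refresh = await _api.PostAsync($"packages/{ticket.PackageId}/refresh", null);
        refresh.EnsureSuccessStatusCode();
    }

    private sealed record UploadTicket(string PresignedUrl, string PackageId);
}
```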
For Dynamo packages, you might push different versions to different workspaces. Your Revit 2025 packages are assigned to workspaces running the latest version of Revit, while Revit 2023 packages are assigned to legacy project workspaces. NUKE can loop through multiple destinations and handle conditional logic based on software versions.
For Grasshopper packages, Orkestra handles Rhino version targeting and resource types. Your NUKE target specifies the Rhino version and resource type, and Orkestra ensures that packages are delivered to the correct locations on user machines.
Testing Orkestra integration locally
This is where NUKE really proves its value. Before pushing anything to production, you can:
```bash
# Test Orkestra upload to a development workspace
./build.cmd --Configuration Debug --OperationType BulkAll --Version 1.0.0-dev
```
Your OnBuildInitialized method loads Orkestra credentials from user secrets (stored securely on your development machine, not in code), connects to a dev workspace (not production), and executes the exact same upload logic that runs in CI. You can set breakpoints in the PackageUploader class, inspect API responses, verify package structure, and confirm everything works before cutting a release.
While GitHub Actions does offer debugging tools and workflow_dispatch for testing, having full local execution with breakpoint debugging provides a different level of development experience—especially valuable when troubleshooting complex API integrations.
Creating GitHub release artifacts
Regardless of your primary distribution method (Orkestra, SharePoint, cloud storage), you’ll likely want GitHub release artifacts as a backup and historical record. NUKE makes this straightforward with the Octokit library (a .NET library for interacting with GitHub’s API).
After all compilation and packaging are complete, the target creates a comprehensive zip of everything and uploads it to the GitHub release. This happens automatically when your release workflow runs, requires no manual intervention, and provides a permanent record of exactly what was released at each version.
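With Octokit, the upload itself is only a few lines. A sketch, where the owner, repository name, and artifact path are placeholders and the release is the one release-drafter created:

```csharp
using System;
using System.IO;
using System.Linq;
using Octokit;

// GITHUB_TOKEN is supplied by the workflow; owner/repo/path are placeholders
var github = new GitHubClient(new ProductHeaderValue("my-nuke-build"))
{
    Credentials = new Credentials(Environment.GetEnvironmentVariable("GITHUB_TOKEN"))
};

// Find the release that release-drafter created for this tag
var releases = await github.Repository.Release.GetAll("your-org", "your-repo");
var release = releases.First(r => r.TagName == _version);

// Attach the zip of everything that was built as a release asset
await using var zip = File.OpenRead(ArtifactsDirectory / $"packages-{_version}.zip");
await github.Repository.Release.UploadAsset(release,
    new ReleaseAssetUpload($"packages-{_version}.zip", "application/zip", zip, null));
```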
The practical benefit: if your primary distribution system has issues, users can always fall back to downloading from GitHub releases. If someone needs an old version for a legacy project, it’s available in your release history. If you need to investigate what changed between versions, you can download and compare the artifacts.
Integrating with GitHub Actions
After building all this NUKE orchestration, your GitHub Actions workflow becomes remarkably simple:
```yaml
name: Release
on:
  pull_request:
    types: [closed]
    branches: [develop, main]
jobs:
  build:
    if: github.event.pull_request.merged == true
    runs-on: windows-latest
    steps:
      - name: Checkout
        uses: actions/checkout@v4
      - name: Create release
        uses: release-drafter/release-drafter@v6
        id: release
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
      - name: Run NUKE
        run: ./build.cmd --Version ${{ steps.release.outputs.tag_name }} --Configuration "Release" --OperationType "BulkAll"
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
          ORKESTRA_TOKEN: ${{ secrets.ORKESTRA_TOKEN }}
```
That’s it. The entire deployment pipeline—building for multiple platforms, packaging, distributing to your chosen channels, creating GitHub release artifacts—is orchestrated by a single NUKE command with a few parameters.
Notice how all the secrets (Orkestra token, SharePoint credentials, cloud storage keys) are passed as environment variables. NUKE picks these up automatically in the OnBuildInitialized method we saw earlier. Your workflow file doesn't need to know anything about how distribution works—that complexity lives in C#, where it belongs.
Local testing: A key advantage
Here's where NUKE offers a distinct advantage compared to pure CI/CD workflows: you can run the exact same deployment process locally that runs in CI. While tools like act exist for locally running GitHub Actions, NUKE's approach provides native local execution with full debugging capabilities.
```bash
# Test full deployment pipeline locally
./build.cmd --Configuration Debug --OperationType BulkAll --Version 1.0.0-local
```
Because OnBuildInitialized checks IsLocalBuild, it loads credentials from user secrets instead of environment variables, connects to development destinations instead of production, and adds extra logging for debugging. You can test uploads to Orkestra, SharePoint operations, cloud storage integration—everything—without committing any changes to GitHub.
The productivity multiplier
The real value of NUKE becomes apparent over time. When you add support for a new Revit version, you update your build.targets file (from the first article) with the new configuration, and NUKE automatically picks it up. When your distribution API changes, you update the client class and test it locally before deploying.
Every improvement to your build process is written in C#, versioned in Git, reviewed in pull requests, and tested before deployment. Your deployment infrastructure becomes as maintainable as your application code.
For teams managing multiple Dynamo and Grasshopper libraries, NUKE provides a template you can replicate across projects. Establish the pattern once (NUKE orchestration, multi-platform builds, flexible distribution, automated releases) and it scales to as many libraries as you maintain.
Conclusion
Moving from pure GitHub Actions YAML to NUKE-orchestrated deployment represents a shift in how you approach CI/CD. It’s not that GitHub Actions can’t handle complex deployments—with composite actions, reusable workflows, and careful design, it certainly can. Instead, NUKE offers a different development experience: instead of learning GitHub Actions’ expression syntax and workflow DSL, you work in C# with full IDE support, local debugging, and the ability to test deployment logic as rigorously as you test application code.
For Dynamo and Grasshopper libraries serving AEC professionals, reliability is crucial. These aren’t consumer apps with rapid iteration and forgiving users; they’re tools that teams depend on for project delivery. When you release a new version, you need confidence that it’s built correctly, appropriately packaged, and distributed to all the right places.
But deployment is about more than build automation; it’s how your library reaches users. The best code has limited impact if users can’t discover it, don’t know which version to install for their software, or struggle with the installation process. This is where the deployment strategy has a direct impact on library adoption.
Professional deployment solves these problems systematically:
- Discoverability: Publish to Orkestra hubs, Yak package manager, or custom Dynamo servers where users already look
- Version clarity: Automatically route the right build to the right users—.NET 8 for Revit 2025, .NET Framework 4.8 for Revit 2023
- Synchronised documentation: Deploy docs alongside code so users always see current installation instructions and compatibility notes
- Simplified support: Push-based distribution means you know exactly which version users have when they report issues
Deployment is the bridge between development effort and user impact. Systematic versioning, multi-channel distribution, and synchronised documentation transform libraries from code artifacts into usable tools that teams can depend on, regardless of whether they build with NUKE, GitHub Actions, or other automation tools.