ICP Blog

Before Your Assets Get Away From You

Posted by Jeremy Bright

Generative AI has fundamentally changed the pace of content production. Marketing, creative, and product teams have been generating more assets, across more channels, faster than most governance frameworks were ever designed to handle. For many organisations, the opportunity of AI-powered content operations arrived ahead of the infrastructure needed to support it. 

That infrastructure gap has a name: the DAM. Specifically, the question of whether an organisation’s DAM is positioned as a passive repository or as an active orchestration layer in the content supply chain. This distinction matters more than ever, and the organisations already addressing it are reaping the rewards in operational efficiency, brand consistency, and AI-powered workflow performance.

That was the conversation we brought to Henry Stewart DAM LA 2026. On March 18, we joined a panel session facilitated by Kathleen Cameron at Google, exploring DAM’s critical and evolving role as it integrates with AI-driven technology. These were the three major talking points we discussed on the day.


The Stakes

A recent Forrester Consulting study found that 80% of enterprises plan to increase DAM investment over the next two years, with AI integration cited as the primary driver. At the same time, the State of AI in DAM 2025 report (drawing on insights from over 250 digital asset professionals) found that just 26% of respondents felt their AI investments were fully delivering expected results.

The gap between investment and return is rarely a technology problem. It is a governance and integration problem, compounded by unrealistic expectations. When AI tools generate content faster than taxonomy strategies, version-control workflows, and approval processes can keep pace, the result is an asset library that grows too quickly and becomes harder to trust.

The biggest opportunity for DAM leaders is to close that gap with knowledge and vision, before the complexity compounds.

The gap between AI investment and AI return is rarely a technology problem. It is a governance and integration problem.


Data Governance and the Metadata Foundation

AI-generated content raises a question that was rarely asked in the era of human-produced assets: how do you know where an asset came from, who approved it, and how it can be used? Metadata has always been the answer, but the standards required now go significantly further than keyword tags and rights fields.

The C2PA standard, developed through a coalition that includes Google, Adobe, and Microsoft, provides a framework for cryptographically signed content credentials that travel with an asset to declare its origin, production method, and edit history. This is increasingly relevant for organisations whose creative workflows touch GenAI tools, with platforms including YouTube and Google’s ad systems already surfacing and responding to C2PA signals.
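To make that concrete, here is a minimal sketch, in Python, of the kind of provenance record a content credential carries. The field names are illustrative only and heavily simplified from the actual C2PA manifest format, which is defined by the spec and cryptographically signed.

```python
# Illustrative sketch only: field names are simplified stand-ins for the
# real C2PA manifest format, which is defined by the spec and signed.
content_credential = {
    "asset_id": "hero-banner-2026-q2.png",    # hypothetical asset
    "claim_generator": "Adobe Firefly",       # the tool that produced the asset
    "production_method": "generative_ai",     # vs. e.g. "camera_capture"
    "edit_history": [
        {"action": "created",         "tool": "Adobe Firefly", "at": "2026-03-02T10:14:00Z"},
        {"action": "color_corrected", "tool": "Photoshop",     "at": "2026-03-03T09:30:00Z"},
    ],
    "signature": "<cryptographic signature binding this record to the asset>",
}
```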

For DAM practitioners, the operational priority is straightforward: audit existing metadata schemas against what GenAI tools are producing, identify where provenance data is at risk of being lost (particularly at ingest from a GenAI tool, social platform upload, CMS ingestion, and external agency handoff), and ensure the DAM is designed to preserve and surface that information.
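As a rough illustration of that audit step, the sketch below flags assets whose provenance fields have gone missing at a handoff point. The required fields and metadata shape are hypothetical, not drawn from any particular DAM or schema.

```python
# Hypothetical minimum provenance fields an ingest pipeline might require.
REQUIRED_PROVENANCE_FIELDS = {"claim_generator", "production_method", "edit_history"}

def audit_provenance(asset_metadata: dict) -> list[str]:
    """Return the provenance fields missing from an asset's metadata."""
    return sorted(REQUIRED_PROVENANCE_FIELDS - asset_metadata.keys())

# Example: an asset re-ingested after a social platform round trip,
# where upload pipelines commonly strip embedded credentials.
stripped = {"title": "Hero banner", "rights": "internal use only"}
print(audit_provenance(stripped))
# ['claim_generator', 'edit_history', 'production_method']
```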

Data governance in the AI era is not a DAM manager's responsibility alone. It is an enterprise content operations imperative.


Asset Provenance and the Watermarking Layer

Metadata-based provenance has a well-documented limitation: most social and distribution platforms strip it on upload. An asset can leave the DAM carrying a complete and valid content credential and arrive at its destination with none of that information intact. This is where forensic watermarking becomes essential to a complete content authentication strategy.

Invisible, AI-resistant signals embedded directly into pixels or audio waveforms persist through compression, transcoding, and social sharing in ways metadata cannot. Google’s SynthID, which extends across images, text, audio, and video, operates on this principle. Meta’s Video Seal, released as open source in late 2024, follows the same approach.

Together with C2PA, these technologies form a layered provenance architecture that gives organisations meaningful asset-tracking capability even when content has travelled far from its origin.
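One way to picture that layering is as a fallback chain: trust the signed credential when it survives distribution, and fall back to watermark detection when platforms have stripped the metadata. The sketch below assumes two hypothetical helpers, since real C2PA validation and watermark detection (SynthID, Video Seal) run through their own tooling.

```python
def read_content_credential(asset_bytes: bytes) -> dict | None:
    """Hypothetical stand-in: parse and validate an embedded C2PA manifest, if present."""
    ...

def detect_watermark(asset_bytes: bytes) -> bool:
    """Hypothetical stand-in: run a forensic watermark detector over the asset."""
    ...

def classify_provenance(asset_bytes: bytes) -> str:
    """Layered check: signed credential first, forensic watermark as the fallback."""
    if read_content_credential(asset_bytes) is not None:
        return "credentialed"  # full origin and edit history available
    if detect_watermark(asset_bytes):
        return "watermarked"   # origin signal survives even though metadata was stripped
    return "unverified"        # no provenance signal at all
```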

The practical integration question for DAM teams is where watermarking belongs in the workflow. Applying it at the point of approval, before assets leave the DAM, creates a consistent audit trail without adding friction.
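In practice that can be a hook on the approval event itself, sketched below with hypothetical helper and DAM API names.

```python
def apply_forensic_watermark(asset_bytes: bytes) -> bytes:
    """Hypothetical stand-in for an invisible, AI-resistant watermark embedder."""
    ...

def on_asset_approved(asset_id: str, asset_bytes: bytes, dam) -> None:
    """Fires once at the approval gate, so everything leaving the DAM is marked."""
    watermarked = apply_forensic_watermark(asset_bytes)
    dam.store_rendition(asset_id, "distribution", watermarked)  # hypothetical DAM API
    dam.log_audit_event(asset_id, event="watermark_applied")    # one consistent audit trail
```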

Applying forensic watermarking at the point of approval creates a consistent audit trail without adding friction to creative operations.


Integration Strategy and DAM as Orchestration Hub

The most significant strategic shift in DAM at this moment is the move from repository to orchestration layer. For teams operating at AI-driven speed, a DAM that waits to be asked for assets cannot support the real-time demands of personalisation engines, automated publishing workflows, or GenAI tools that need approved creative as reference material.

The push-pull architecture question crystallises this. Pull models work well at human-managed scale. Push models, where the DAM actively delivers assets to connected systems based on rules and triggers, are what enable genuine content operations automation and personalisation at scale.
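A push model can be as simple as delivery rules evaluated whenever an asset changes state. The sketch below is a minimal illustration under assumed names, not a reference implementation of any particular DAM.

```python
DELIVERY_RULES = [
    # (trigger predicate, destination); all names are hypothetical
    (lambda asset: asset["status"] == "approved" and asset["channel"] == "web", "cms"),
    (lambda asset: asset["status"] == "approved" and asset["type"] == "image", "personalisation_engine"),
]

def push_on_event(asset: dict, connectors: dict) -> None:
    """Push-model delivery: the DAM acts on the event instead of waiting to be queried."""
    for matches, destination in DELIVERY_RULES:
        if matches(asset):
            connectors[destination].deliver(asset)
```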

Organisations seeing the strongest DAM ROI are those that have mapped every upstream and downstream handoff and made deliberate integration choices at each point.

This goes beyond technology. How assets are structured, tagged, versioned, and catalogued in the DAM directly determines how well AI tools can discover and work with them. For teams building toward agentic content workflows, where AI takes autonomous actions across the system, the DAM’s access controls and governance framework need to accommodate non-human actors as full workflow participants.
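One way to do that is to register agents as principals in their own right, with deliberately narrower scopes than their human counterparts. The permission model and names below are assumptions for illustration.

```python
# Humans and AI agents as first-class principals; scope names are hypothetical.
PRINCIPALS = {
    "jane.doe":        {"kind": "human", "scopes": {"read", "upload", "approve"}},
    "genai-reference": {"kind": "agent", "scopes": {"read"}},             # pulls approved creative only
    "auto-publisher":  {"kind": "agent", "scopes": {"read", "publish"}},  # pushes approved assets downstream
}

def authorise(principal: str, action: str) -> bool:
    """Grant an action only when the principal, human or agent, holds the scope."""
    return action in PRINCIPALS.get(principal, {}).get("scopes", set())
```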

Taxonomy governance and integration architecture are two sides of the same coin, and both deserve attention before AI adoption at scale, not after.


Join the Conversation

If your team is working through any of these challenges (whether building a metadata governance framework, rethinking DAM integration architecture, or preparing for AI-scale content operations), we would welcome the conversation. Reach out if you would like to explore any of the ideas discussed during the session or to submit follow-up questions.

The organisations leading in AI-powered content operations will continue to be the ones with the strongest governance foundations beneath them. There is real value to be unlocked by getting this right, and the time to build those foundations is now.


Sources Referenced

This blog draws on the following sources for factual claims and statistics: 
  • Forrester Consulting / Orange Logic: New Independent Study on DAM Investment (January 2026)
  • WoodWing: The State of AI in DAM 2025 Report (Kristina Huddart; 250+ professionals surveyed)
  • Aprimo: 2025 Top DAM Trends; IDC data on AI/GenAI prioritization in DAM
  • Coalition for Content Provenance and Authenticity: C2PA Specification v2.2 Explainer (April 2025)
  • Google Blog: How Google and the C2PA Are Increasing Transparency for Gen AI Content
  • Sima Labs: C2PA vs SynthID vs Meta Video Seal, 2025 Enterprise AI-Video Authenticity Playbook
  • NSA Cybersecurity Information Sheet: Strengthening Multimedia Integrity in the Generative AI Era (January 2025)
  • Henry Stewart Conferences: DAM Los Angeles 2026