Automating Content Production for Agility and Velocity

Last week, Adobe announced new Adobe Experience Manager capabilities designed to enable more agile content teams and greater content velocity. This update brings Creative Cloud functionality directly into Adobe Experience Manager, along with some promising automated-tagging updates. These new capabilities open up a few key opportunities for Adobe users and creators, including:
  • The ability to rapidly scale content production, which is a requirement for all the contextual and personalized content needs in today’s digital experience environment.
  • Agility leading to greater content velocity, which is the ability to put out vastly more content (compared to your internal benchmarks as well as your competition).
The two key Adobe Experience Manager innovations available in 2021 are described by Adobe as:
  • “Creative Cloud-powered content automation capabilities in Adobe Experience Manager Assets as a Cloud Service”
  • “AI-powered digital asset management”, which includes:
    • Automatically applying color tags to all product imagery
    • Automated keyword/text extraction and tagging of assets
You can read the full announcement from Adobe here.

Challenges and Risks of Content Automation

Before you launch into using these new enhancements, there is something your organization needs to consider: relying on an AI/ML system (in this case, Adobe Sensei) carries some large risks. In fact, the same day this Adobe announcement was published, VentureBeat published an incredibly relevant article that covers one of the major risk factors: data quality. In the article, “Is poor data quality undermining your marketing AI?”, the author, Louis Columbus, emphasizes key issues, including:
  • “The most common reason AI and ML fail in the marketing sector is that there’s little consistency to the data across all campaigns and strategies. Every campaign, initiative, and program has its unique meta-tags, taxonomies, and data structures.”
  • “Creating greater consistency across taxonomies, data structures, data field definitions, and meta-tags would give marketing data scientists a higher probability of succeeding with their ML models at scale.”
  • “Instead of asking data scientists to solve the marketing department’s data quality challenges, it would be far better to have the marketing department focus on creating a single, unified content data model.”
We highly recommend you read the full article to understand the scope of what’s at stake, not just for these changes to Adobe Experience Manager but for the general marketing data quality issues negatively impacting your organization’s ability to use AI/ML effectively (if at all).

The Impact of Data Quality on Adobe Experience Manager

As this relates to these innovations forthcoming in Adobe Experience Manager, if your organization is scaling content velocity via new automation, you risk losing control if you can’t trust the decisions your AI tagging system is making. If your content producers aren’t aligned on naming conventions and standards upfront and proactively, an incorrectly named file may flow into your DAM and then be re-used at scale with the wrong name intact – propagating hundreds (or more) of additional assets that are then also wrongly tagged by your AI/ML system. At best, these mistakes lead to a huge clean-up project for someone. At worst, your business is delivering poor experiences to your customers. And we all know that a great digital experience is paramount today. In fact, “84% of customers say the experience a company provides is as important as its products or services” and “1 in 3 consumers will walk away from a brand they love after just one bad experience”. (Fullstory)
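One way to catch a misnamed file before it propagates is to validate asset names against the agreed convention at the point of ingest. The sketch below illustrates the idea in Python; the naming convention and field names are hypothetical examples, not an Adobe Experience Manager or Claravine standard.

```python
import re

# Hypothetical convention: brand_campaign_assettype_locale_version.ext
# e.g. "acme_summer21_banner_en-us_v02.jpg" -- illustrative only.
NAME_PATTERN = re.compile(
    r"^(?P<brand>[a-z0-9]+)_"
    r"(?P<campaign>[a-z0-9]+)_"
    r"(?P<asset_type>[a-z]+)_"
    r"(?P<locale>[a-z]{2}-[a-z]{2})_"
    r"v(?P<version>\d{2})\.(?P<ext>jpg|png|mp4)$"
)

def validate_asset_name(filename: str) -> dict:
    """Parse the metadata encoded in an asset name, or raise before the
    file enters the DAM and gets re-used (and mis-tagged) at scale."""
    match = NAME_PATTERN.match(filename)
    if not match:
        raise ValueError(f"Asset name violates convention: {filename!r}")
    return match.groupdict()

# A conforming name yields structured fields; a non-conforming one
# (e.g. "Final FINAL banner.jpg") is rejected at the point of origin.
fields = validate_asset_name("acme_summer21_banner_en-us_v02.jpg")
```

The point is not this particular pattern but the placement of the check: rejecting a bad name at upload is cheap, while untangling hundreds of derived assets after AI tagging has run is not.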

Delay in Content Velocity

Let’s assume Sensei works well (though, as the VentureBeat article above notes, it certainly won’t be perfect). Your overall content velocity will still be slow because of the many manual steps still required to build the content and define other asset attributes, which are managed through workflows or by content authors. The automation proposed in this latest update is only effective after the content is created and data standards are applied throughout the workflow. Only then can you maximize content velocity.

How to Solve

Organizations need to finally solve their data standards and integrity. They can no longer rely on antiquated, slow systems built around purely reactive “clean-up” methodologies. Data has to be right at its point of origin, and that starts with a unified data model. That model can’t just be a conceptual diagram or plan that people only philosophically agree upon. It has to be implemented and applied globally, accessible across internal teams and external agencies – anywhere data is being created (which is pretty much everywhere in an organization now). This “unified data model” is one of the main areas we are solving here at Claravine. We talk about the importance of a unified data model here and provide an excellent outline of building this data model, or taxonomy, here. In fact, our recent launch of The Data Standards Cloud™ addresses the lack of data quality being fed into AI/ML systems (among other core systems in your business’s technology stack). Our technology lets teams and organizations manage their data standards and create data integrity globally, providing consistent, quality information to optimize business outcomes. Learn more about how Claravine is solving these challenges for teams at the largest companies in the world.
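In practice, a unified data model can start as one shared, machine-readable schema that every team and agency validates against before data is created. The Python sketch below shows the shape of such a check; the field names and controlled vocabularies are hypothetical placeholders, not a real Claravine or Adobe schema.

```python
# Hypothetical shared model: each field maps to its controlled vocabulary.
# In a real deployment this definition would live in one central place
# and be consumed by every team and agency, not copied per campaign.
UNIFIED_MODEL = {
    "channel": {"email", "paid-social", "display", "web"},
    "region": {"na", "emea", "apac", "latam"},
    "content_type": {"banner", "video", "article"},
}

def validate_record(record: dict) -> list:
    """Check one metadata record against the shared model and return a
    list of violations (an empty list means the record conforms)."""
    errors = []
    for field, allowed in UNIFIED_MODEL.items():
        value = record.get(field)
        if value is None:
            errors.append(f"missing field: {field}")
        elif value not in allowed:
            errors.append(f"{field}={value!r} not in {sorted(allowed)}")
    return errors

# A conforming record passes; "promo" is flagged because it is not in
# the content_type vocabulary -- caught before it reaches any AI/ML system.
violations = validate_record(
    {"channel": "email", "region": "na", "content_type": "promo"}
)
```

Because every producer validates against the same definition, the consistency across taxonomies and meta-tags that Columbus calls for is enforced at creation time rather than reconstructed afterward.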
