DTM and the Case for Direct Call Rules
Cut to the Chase
If you’re here to learn how to implement Direct Call Rules in Adobe’s Dynamic Tag Manager (DTM), you should jump to the end. There’s almost no documentation anywhere else online showing you how to set up Direct Calls, and even less about why you would want to. But based on my experience this past week, I’m unlikely to ever use anything else. So before I explain how, I want to focus on the why. This post is the story of my conversion, and the light-bulb moment (or rather, lightning-bolt) which convinced me to switch from Event-Driven Rules to Direct Call Rules.
My first real exposure to DTM came this past March at the Adobe Analytics conference in Salt Lake City. In a roomful of non-developers, the presenters’ message resonated powerfully: DTM would set us free. Perhaps nothing that dramatic was actually said, but I do remember after each example that they wrapped up with the same phrase: “and see how I did it without writing a single line of code.” The clear message that I took away was that I’ve been beholden to the code-writers for too long, and DTM offered me the mechanism to take control of the analytics capture across my site.
So why do you suppose that I woke up in a sudden panic this past Saturday morning? That’s what this post is about: how in one instant of clarity I came to see both the impending disaster before me, and the original lie that put me on that path, that siren’s voice that drew me out of the gate in exactly the wrong direction. It’s also about the discovery of Direct Call Rules, how they saved the day, and why they will be the only type of rule I build, now and in the future.
The Dark Underbelly of Event-Based Rules
I had been suppressing a nagging concern for some time—Saturday morning was when it finally broke through into my consciousness. I realized that, with every new event-based rule I saved, I was in essence placing another obstacle in the way of the developers whose job it would be to maintain and improve our website’s architecture. A quick web search turned up Josh West’s post from last November, which confirmed my worries with a vivid example. He illustrated a scenario where, with one smart upgrade after another, a right-thinking developer could compromise, corrupt, and eventually completely destroy the tracking of key site interactions, simply by doing her job well. Jason Thompson echoed the sentiment, in effect saying that just because we can do something doesn’t mean that we should—at least that’s what I drew out of his last paragraph. After all, we don’t just want to build something that works; we want to build something that will work for years to come.
At this point, short weeks before our deadline, I saw that I had applied about as much foresight as a sand-castle builder at low tide, or someone setting up an open-air meditation clinic right next to Old Faithful. As I became faster and more effective at laying out listeners and bypassing the developers, I was casting one fragile web after another across our site. I like the image of a spider’s web, because webs are practically invisible, and people don’t notice them until they’ve walked through and destroyed them. The development team would have no warning of what shouldn’t be changed on the site, since the logic I was writing lived in DTM rather than the site’s codebase.
The Lie and the Truth about Tag Management
I’ve hit it pretty hard already, but I’ll lay it out here one last time: the value of a Tag Management System (TMS) such as DTM is not in its capacity to let you work independently of the development team. Just the opposite: the most important advantage of any TMS is that it facilitates an active and necessary collaboration between the analysts, who know best what to do with data elements, and the developers, who know best how to deliver them. There are details within analytics implementations that site developers shouldn’t have to know. DTM pulls the elaborate mapping between data points and analytics reports off of their plate, and gives analysts the freedom to reconfigure that mapping as the need arises.
Here’s an illustration of this benefit, based on a code emergency a team of developers in New York brought to me last week. I took this screenshot during the meeting.
It’s no wonder they were having trouble. These switch cases are chock-full of Omniture’s home-grown lingo: s-props and evars and functions, with no clues about how this code might be extended for the next case that comes along.
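To give a flavor of what the team was up against, here is a hypothetical reconstruction of that style of code; the prop/eVar numbers, event names, and case labels are all invented for illustration, not taken from the screenshot.

```javascript
// Hard-coded Omniture-style tracking: every interaction gets its own case,
// copied and tweaked from the last, with no hint of which report it feeds.
function trackInteraction(s, action) {
  switch (action) {
    case "cartAdd":
      s.linkTrackVars = "prop12,eVar12,events";
      s.linkTrackEvents = "event7";
      s.prop12 = "cart add";
      s.eVar12 = "cart add";
      s.events = "event7";
      s.tl(true, "o", "Cart Add");
      break;
    case "newsletterSignup":
      s.linkTrackVars = "prop15,eVar15,events";
      s.linkTrackEvents = "event9";
      s.prop15 = "newsletter signup";
      s.eVar15 = "newsletter signup";
      s.events = "event9";
      s.tl(true, "o", "Newsletter Signup");
      break;
    // ...dozens more cases in the same vein
  }
}
```

Nothing here tells the next developer what prop12 means, why event7 matters, or how to add the next case without breaking a report.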
DTM allows the embedded code to look like this:
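Something along these lines—the data layer properties and the "cart-add" string are my own example names, and the two stand-in declarations at the top only exist so the snippet runs outside a real page:

```javascript
// Stand-ins for illustration: on a live site, the DTM embed script supplies
// _satellite, and the development team populates digitalData at page load.
var digitalData = {};
var _satellite = { track: function (ruleString) { _satellite.lastCall = ruleString; } };

// The embedded code itself: state the facts, name the moment, and stop.
digitalData.cart = { action: "cart add", itemCount: 3 };
_satellite.track("cart-add");
```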
Here is code that anyone can read and follow, where only the high-level, clear intentions of the analytics efforts are included in the site’s code. All of the complex variable mapping is handled within the DTM interface. Not only does this make it less vulnerable to the whims of uninitiated developers who may feel tempted to copy and paste code from one area to another, or delete values that they have never heard of, but it also allows companies to switch from one analytics vendor to another (such as Adobe Analytics to Google Analytics, or vice versa) without having to uproot anything in the code.
Data Layer is King, Data Element is Tom
I’m not alone when I preach that the best TMS implementations are built upon a solid Data Layer foundation. Don’t skip it. Both Josh and Jason emphasized the need for one in the blogs I referenced above, and Adobe’s evangelists are leading with that same message: see what Jeff Chasin and Adam Klintworth said about it.
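For concreteness, a page-level data layer might start out like this; every property name below is an example, and the exact schema is whatever you and your developers agree on up front:

```javascript
// Example data layer, populated by developers when the page loads.
// Data Elements in DTM then map onto these properties, so rules never
// have to touch raw DOM or CSS selectors.
var digitalData = {
  page: { name: "checkout:review", section: "checkout" },
  user: { loginState: "authenticated" },
  cart: { itemCount: 2, subtotal: 84.5 }
};
```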
Actually, there is one terrific reason to always use Data Elements, but it’s complicated, and will have to be the essence of the next DTM blog post I write. In a nutshell, it provides a way to reverse-engineer the entire Data Layer object’s structure, and from there clear out values that were inadvertently being carried from one analytics call to the next.
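As a teaser, the core of that trick is just a recursive walk over the data layer. This is a rough sketch of the idea, not the full approach:

```javascript
// List every leaf path in a data layer object (e.g. "page.name"),
// which then makes it possible to clear stale values between calls.
function listPaths(obj, prefix) {
  var paths = [];
  for (var key in obj) {
    var full = prefix ? prefix + "." + key : key;
    if (obj[key] !== null && typeof obj[key] === "object") {
      paths = paths.concat(listPaths(obj[key], full));
    } else {
      paths.push(full);
    }
  }
  return paths;
}
```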
Direct Call Rules
This is the part where my advice leaves the most common paths. I believe that data should be pushed to DTM, not pulled via DTM’s listeners. But this approach is certainly not being advanced as a best practice on the DTM scene. Notice the paltry two sentences of documentation devoted to Direct Call Rules on the entire DTM Help site, and the link that shows a picture with almost no explanation.
A fellow member on an Adobe help forum pointed me to this video where I learned how to apply Direct Call Rules, but even there the implication was that this option was the last resort, something you could fall back on when nothing else worked.
That’s where I disagree. One last time: the CSS layer is an unreliable foundation for triggering data collection, and your TMS will not warn developers of tracking components that will break if they restructure this or that element in the code. A Direct Call strategy resolves both of these issues, ensuring that the call is made when it’s supposed to fire, and standing as a reminder to the coding team that the analytics system is doing something intentional at that moment.
So let me end with a quick tutorial.
Step 1: Define the Data Layer Requirements.
Step 2: Make sure a Data Element exists for each Data Layer property.
Step 3: Create a New Direct Call Rule. The Name you give it isn’t as important as the String you type in the Condition section.
Step 4: Make that string the argument in the _satellite.track function, to be delivered with the Data Layer Requirements.
Step 5: Test it on your own localhost. (I’m a stickler on this point—it’s not that hard; you can learn to do it!)
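The steps above can be sketched end to end like this. Every name below is invented, and the small `_satellite` stand-in at the top only mimics what the real DTM embed script does, so the sketch runs anywhere:

```javascript
// Stand-in for the DTM embed. In production, _satellite comes from the DTM
// header script, and rule bodies are configured in the DTM interface.
var _satellite = {
  _directCallRules: {},
  track: function (ruleString) {           // Step 4: the page fires the call
    var rule = this._directCallRules[ruleString];
    if (rule) rule();
  }
};

// Step 1: the data layer the developers agree to populate.
var digitalData = { transaction: { id: "T1234", revenue: 49.99 } };

// Steps 2-3: a Direct Call Rule whose condition string is "checkout-complete".
// Its body would normally read Data Elements mapped to digitalData properties.
var firedWith = null;
_satellite._directCallRules["checkout-complete"] = function () {
  firedWith = digitalData.transaction.id;
};

// Step 4 again, from the page at the moment that matters:
_satellite.track("checkout-complete");
```

Step 5 is then just loading the page on localhost and watching the call fire.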
Here are some screenshots that will help you see how it all looks when put together: