I am a creative.

I am a creative. What I do is alchemy. It is a mystery. I do not so much do it, as let it be done through me.

I am a creative. Not all creative people like this label. Not all see themselves this way. Some creative people see science in what they do. That is their truth, and I respect it. Maybe I even envy them, a little. But my process is different—my being is different.

Apologizing and qualifying in advance is a distraction. That’s what my brain does to sabotage me. I set it aside for now. I can come back later to apologize and qualify. After I’ve said what I came to say. Which is hard enough. 

Except when it is easy and flows like a river of wine.

Sometimes it does come that way. Sometimes what I need to create comes in an instant. I have learned not to say it at that moment, because if you admit that sometimes the idea just comes and it is the best idea and you know it is the best idea, they think you don’t work hard enough.

Sometimes I work and work and work until the idea comes. Sometimes it comes instantly and I don’t tell anyone for three days. Sometimes I’m so excited by the idea that came instantly that I blurt it out, can’t help myself. Like a boy who found a prize in his Cracker Jacks. Sometimes I get away with this. Sometimes other people agree: yes, that is the best idea. Most times they don’t and I regret having given way to enthusiasm.

Enthusiasm is best saved for the meeting where it will make a difference. Not the casual get-together that precedes that meeting by two other meetings. Nobody knows why we have all these meetings. We keep saying we’re doing away with them, but then just finding other ways to have them. Sometimes they are even good. But other times they are a distraction from the actual work. The proportion between when meetings are useful, and when they are a pitiful distraction, varies, depending on what you do and where you do it. And who you are and how you do it. Again I digress. I am a creative. That is the theme.

Sometimes many hours of hard and patient work produce something that is barely serviceable. Sometimes I have to accept that and move on to the next project.

Don’t ask about process. I am a creative.

I am a creative. I don’t control my dreams. And I don’t control my best ideas.

I can hammer away, surround myself with facts or images, and sometimes that works. I can go for a walk, and sometimes that works. I can be making dinner and there’s a Eureka having nothing to do with sizzling oil and bubbling pots. Often I know what to do the instant I wake up. And then, almost as often, as I become conscious and part of the world again, the idea that would have saved me turns to vanishing dust in a mindless wind of oblivion. For creativity, I believe, comes from that other world. The one we enter in dreams, and perhaps, before birth and after death. But that’s for poets to wonder, and I am not a poet. I am a creative. And it’s for theologians to mass armies about in their creative world that they insist is real. But that is another digression. And a depressing one. Maybe on a much more important topic than whether I am a creative or not. But still a digression from what I came here to say.

Sometimes the process is avoidance. And agony. You know the cliché about the tortured artist? It’s true, even when the artist (and let’s put that noun in quotes) is trying to write a soft drink jingle, a callback in a tired sitcom, a budget request.

Some people who hate being called creative may be closeted creatives, but that’s between them and their gods. No offense meant. Your truth is true, too. But mine is for me. 

Creatives recognize creatives.

Creatives recognize creatives like queers recognize queers, like real rappers recognize real rappers, like cons know cons. Creatives feel massive respect for creatives. We love, honor, emulate, and practically deify the great ones. To deify any human is, of course, a tragic mistake. We have been warned. We know better. We know people are just people. They squabble, they are lonely, they regret their most important decisions, they are poor and hungry, they can be cruel, they can be just as stupid as we can, because, like us, they are clay. But. But. But they make this amazing thing. They birth something that did not exist before them, and could not exist without them. They are the mothers of ideas. And I suppose, since it’s just lying there, I have to add that they are the mothers of invention. Ba dum bum! OK, that’s done. Continue.

Creatives belittle our own small achievements, because we compare them to those of the great ones. Beautiful animation! Well, I’m no Miyazaki. Now THAT is greatness. That is greatness straight from the mind of God. This half-starved little thing that I made? It more or less fell off the back of the turnip truck. And the turnips weren’t even fresh.

Creatives know that, at best, they are Salieri. Even the creatives who are Mozart believe that.

I am a creative. I haven’t worked in advertising in 30 years, but in my nightmares, it’s my former creative directors who judge me. And they are right to do so. I am too lazy, too facile, and when it really counts, my mind goes blank. There is no pill for creative dysfunction.

I am a creative. Every deadline I make is an adventure that makes Indiana Jones look like a pensioner snoring in a deck chair. The longer I remain a creative, the faster I am when I do my work and the longer I brood and walk in circles and stare blankly before I do that work. 

I am still 10 times faster than people who are not creative, or people who have only been creative a short while, or people who have only been professionally creative a short while. It’s just that, before I work 10 times as fast as they do, I spend twice as long as they do putting the work off. I am that confident in my ability to do a great job when I put my mind to it. I am that addicted to the adrenaline rush of postponement. I am still that afraid of the jump.

I am not an artist.

I am a creative. Not an artist. Though I dreamed, as a lad, of someday being that. Some of us belittle our gifts and dislike ourselves because we are not Michelangelos and Warhols. That is narcissism—but at least we aren’t in politics.

I am a creative. Though I believe in reason and science, I decide by intuition and impulse. And live with what follows—the catastrophes as well as the triumphs. 

I am a creative. Every word I’ve said here will annoy other creatives, who see things differently. Ask two creatives a question, get three opinions. Our disagreement, our passion about it, and our commitment to our own truth are, at least to me, the proofs that we are creatives, no matter how we may feel about it.

I am a creative. I lament my lack of taste in the areas about which I know very little, which is to say almost all areas of human knowledge. And I trust my taste above all other things in the areas closest to my heart, or perhaps, more accurately, to my obsessions. Without my obsessions, I would probably have to spend my time looking life in the eye, and almost none of us can do that for long. Not honestly. Not really. Because much in life, if you really look at it, is unbearable.

I am a creative. I believe, as a parent believes, that when I am gone, some small good part of me will carry on in the mind of at least one other person.

Working saves me from worrying about work.

I am a creative. I live in dread of my small gift suddenly going away.

I am a creative. I am too busy making the next thing to spend too much time deeply considering that almost nothing I make will come anywhere near the greatness I comically aspire to.

I am a creative. I believe in the ultimate mystery of process. I believe in it so much, I am even fool enough to publish an essay I dictated into a tiny machine and didn’t take time to review or revise. I won’t do this often, I promise. But I did it just now, because, as afraid as I might be of your seeing through my pitiful gestures toward the beautiful, I was even more afraid of forgetting what I came to say. 

There. I think I’ve said it. 

Opportunities for AI in Accessibility

In reading Joe Dolson’s recent piece on the intersection of AI and accessibility, I absolutely appreciated the skepticism that he has for AI in general as well as for the ways that many have been using it. In fact, I’m very skeptical of AI myself, despite my role at Microsoft as an accessibility innovation strategist who helps run the AI for Accessibility grant program. As with any tool, AI can be used in very constructive, inclusive, and accessible ways; and it can also be used in destructive, exclusive, and harmful ones. And there are a ton of uses somewhere in the mediocre middle as well.

I’d like you to consider this a “yes… and” piece to complement Joe’s post. I’m not trying to refute any of what he’s saying but rather provide some visibility to projects and opportunities where AI can make meaningful differences for people with disabilities. To be clear, I’m not saying that there aren’t real risks or pressing issues with AI that need to be addressed—there are, and we’ve needed to address them, like, yesterday—but I want to take a little time to talk about what’s possible in hopes that we’ll get there one day.

Alternative text

Joe’s piece spends a lot of time talking about computer-vision models generating alternative text. He highlights a ton of valid issues with the current state of things. And while computer-vision models continue to improve in the quality and richness of detail in their descriptions, their results aren’t great. As he rightly points out, the current state of image analysis is pretty poor—especially for certain image types—in large part because current AI systems examine images in isolation rather than within their surrounding contexts (which is a consequence of having separate “foundation” models for text analysis and image analysis). Today’s models aren’t trained to distinguish between images that are contextually relevant (that should probably have descriptions) and those that are purely decorative (which might not need a description) either. Still, I think there’s potential in this space.

As Joe mentions, human-in-the-loop authoring of alt text should absolutely be a thing. And if AI can pop in to offer a starting point for alt text—even if that starting point might be a prompt saying “What is this BS? That’s not right at all… Let me try to offer a starting point”—I think that’s a win.

Taking things a step further, if we can specifically train a model to analyze image usage in context, it could help us more quickly identify which images are likely to be decorative and which ones likely require a description. That will help reinforce which contexts call for image descriptions and it’ll improve authors’ efficiency toward making their pages more accessible.
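To make the decorative-versus-relevant distinction concrete, here’s a minimal sketch of the kind of in-context signals a purpose-built model (or even a plain heuristic) could weigh. Everything here is illustrative: the attribute names, weights, and threshold are invented, not a real accessibility-checker API.

```python
# Heuristic sketch: score how likely an <img> is purely decorative,
# based on contextual signals pulled from the surrounding markup.
# Attribute names, weights, and the threshold are all hypothetical.

def likely_decorative(img: dict) -> bool:
    """`img` holds attributes extracted from the page's markup, e.g.:
    {"alt": "", "role": "", "width": 1, "height": 1,
     "src": "spacer.gif", "inside_link_with_text": False}
    """
    score = 0
    if img.get("alt", None) == "":              # author explicitly left alt empty
        score += 2
    if img.get("role") == "presentation":       # marked as presentational
        score += 3
    if img.get("aria_hidden"):                  # hidden from assistive tech
        score += 3
    w, h = img.get("width", 0), img.get("height", 0)
    if 0 < w <= 4 or 0 < h <= 4:                # tiny, spacer-sized images
        score += 2
    name = img.get("src", "").lower()
    if any(hint in name for hint in ("spacer", "divider", "bullet")):
        score += 1
    if img.get("inside_link_with_text"):        # the link already has a label
        score += 1
    return score >= 3
```

A real model would learn these signals (and subtler ones, like whether nearby prose refers to the image) from labeled examples rather than hand-tuned weights, but the inputs would look much like this.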

While complex images—like graphs and charts—are challenging to describe in any sort of succinct way (even for humans), the image example shared in the GPT-4 announcement points to an interesting opportunity as well. Let’s suppose that you came across a chart whose description was simply the title of the chart and the kind of visualization it was, such as: Pie chart comparing smartphone usage to feature phone usage among US households making under $30,000 a year. (That would be a pretty awful alt text for a chart since that would tend to leave many questions about the data unanswered, but then again, let’s suppose that that was the description that was in place.) If your browser knew that that image was a pie chart (because an onboard model concluded this), imagine a world where users could ask questions like these about the graphic:

  • Do more people use smartphones or feature phones?
  • How many more?
  • Is there a group of people that don’t fall into either of these buckets?
  • How many is that?

Setting aside the realities of large language model (LLM) hallucinations—where a model just makes up plausible-sounding “facts”—for a moment, the opportunity to learn more about images and data in this way could be revolutionary for blind and low-vision folks as well as for people with various forms of color blindness, cognitive disabilities, and so on. It could also be useful in educational contexts to help people who can see these charts, as is, to understand the data in the charts.

Taking things a step further: What if you could ask your browser to simplify a complex chart? What if you could ask it to isolate a single line on a line graph? What if you could ask your browser to transpose the colors of the different lines to work better for the form of color blindness you have? What if you could ask it to swap colors for patterns? Given these tools’ chat-based interfaces and our existing ability to manipulate images in today’s AI tools, that seems like a possibility.

Now imagine a purpose-built model that could extract the information from that chart and convert it to another format. For example, perhaps it could turn that pie chart (or better yet, a series of pie charts) into more accessible (and useful) formats, like spreadsheets. That would be amazing!
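The chart-to-spreadsheet idea is mostly plumbing once a model has extracted the underlying values. A sketch of that last step, assuming the extraction has already produced labeled segments (the numbers below are made up for illustration, not real survey data):

```python
import csv
import io

# Suppose a vision model has extracted labeled segments from the pie chart.
# These values are invented for illustration.
segments = [
    ("Smartphone", 71),
    ("Feature phone", 21),
    ("No phone", 8),
]

def to_csv(segments) -> str:
    """Convert extracted (label, percent) pairs into spreadsheet-ready CSV."""
    buf = io.StringIO()
    writer = csv.writer(buf)
    writer.writerow(["Category", "Percent of households"])
    writer.writerows(segments)
    return buf.getvalue()

def answer_comparison(segments, a: str, b: str):
    """Answer 'which is bigger, and by how much?'—the kind of question
    a user might ask the chart—directly from the extracted data."""
    vals = dict(segments)
    bigger = a if vals[a] > vals[b] else b
    return bigger, abs(vals[a] - vals[b])
```

Answering from extracted data rather than from the model’s free-form “memory” of the image also sidesteps much of the hallucination risk mentioned above.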

Matching algorithms

Safiya Umoja Noble absolutely hit the nail on the head when she titled her book Algorithms of Oppression. While her book was focused on the ways that search engines reinforce racism, I think that it’s equally true that all computer models have the potential to amplify conflict, bias, and intolerance. Whether it’s Twitter always showing you the latest tweet from a bored billionaire, YouTube sending us into a Q-hole, or Instagram warping our ideas of what natural bodies look like, we know that poorly authored and maintained algorithms are incredibly harmful. A lot of this stems from a lack of diversity among the people who shape and build them. When these platforms are built with inclusivity baked in, however, there’s real potential for algorithm development to help people with disabilities.

Take Mentra, for example. They are an employment network for neurodivergent people. They use an algorithm to match job seekers with potential employers based on over 75 data points. On the job-seeker side of things, it considers each candidate’s strengths, their necessary and preferred workplace accommodations, environmental sensitivities, and so on. On the employer side, it considers each work environment, communication factors related to each job, and the like. As a company run by neurodivergent folks, Mentra made the decision to flip the script when it came to typical employment sites. They use their algorithm to propose available candidates to companies, who can then connect with job seekers that they are interested in, reducing the emotional and physical labor on the job-seeker side of things.
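Mentra’s actual algorithm weighs 75-plus data points; as a toy sketch of the general shape of such a match (every field name and weight below is invented, not Mentra’s real schema), note how necessary accommodations act as a hard constraint while everything else is a weighted signal:

```python
def match_score(candidate: dict, job: dict) -> float:
    """Toy weighted match between a job seeker and a role.
    Field names and weights are invented for illustration."""
    # A role must support every *necessary* accommodation; otherwise no match.
    if not set(candidate["necessary_accommodations"]) <= set(job["accommodations"]):
        return 0.0
    score = 0.0
    # Strengths that overlap with the role's needs count positively.
    score += 2.0 * len(set(candidate["strengths"]) & set(job["needs"]))
    # Preferred accommodations add a smaller bonus.
    score += 0.5 * len(set(candidate["preferred_accommodations"]) & set(job["accommodations"]))
    # Environmental sensitivities the workplace can't avoid count against.
    score -= 1.5 * len(set(candidate["sensitivities"]) & set(job["environment"]))
    return max(score, 0.0)
```

The hard constraint is the design point worth noticing: treating necessary accommodations as a filter rather than just another weighted feature is what keeps the algorithm from “trading away” a candidate’s access needs for a higher overall score.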

When more people with disabilities are involved in the creation of algorithms, that can reduce the chances that these algorithms will inflict harm on their communities. That’s why diverse teams are so important.

Imagine that a social media company’s recommendation engine was tuned to analyze who you’re following and if it was tuned to prioritize follow recommendations for people who talked about similar things but who were different in some key ways from your existing sphere of influence. For example, if you were to follow a bunch of nondisabled white male academics who talk about AI, it could suggest that you follow academics who are disabled or aren’t white or aren’t male who also talk about AI. If you took its recommendations, perhaps you’d get a more holistic and nuanced understanding of what’s happening in the AI field. These same systems should also use their understanding of biases about particular communities—including, for instance, the disability community—to make sure that they aren’t recommending any of their users follow accounts that perpetuate biases against (or, worse, spew hate toward) those groups.
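The re-ranking idea can be sketched in a few lines. This is a deliberately simplified toy, not a real platform’s schema: accounts here carry a topic-similarity score plus a single self-disclosed identity field, and the sketch just boosts similar-topic accounts whose identity differs from the user’s existing follows.

```python
def rerank(recommendations: list, followed: list) -> list:
    """Boost accounts that discuss similar topics but differ from the
    user's existing sphere along self-disclosed identity dimensions.
    All fields here are illustrative, not a real recommendation schema."""
    followed_identities = {acct["identity"] for acct in followed}

    def score(acct: dict) -> float:
        base = acct["topic_similarity"]           # "talks about similar things"
        # Small bonus for perspectives missing from the current sphere.
        bonus = 0.25 if acct["identity"] not in followed_identities else 0.0
        return base + bonus

    return sorted(recommendations, key=score, reverse=True)
```

Even a modest bonus term like this changes which accounts surface first; the hard (and important) part in practice is sourcing identity signals consentfully rather than inferring them.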

Other ways that AI can help people with disabilities

If I weren’t trying to put this together between other tasks, I’m sure that I could go on and on, providing all kinds of examples of how AI could be used to help people with disabilities, but I’m going to make this last section into a bit of a lightning round. In no particular order:

  • Voice preservation. You may have seen the VALL-E paper or Apple’s Global Accessibility Awareness Day announcement or you may be familiar with the voice-preservation offerings from Microsoft, Acapela, or others. It’s possible to train an AI model to replicate your voice, which can be a tremendous boon for people who have ALS (Lou Gehrig’s disease) or motor-neuron disease or other medical conditions that can lead to an inability to talk. This is, of course, the same tech that can also be used to create audio deepfakes, so it’s something that we need to approach responsibly, but the tech has truly transformative potential.
  • Voice recognition. Researchers like those in the Speech Accessibility Project are paying people with disabilities for their help in collecting recordings of people with atypical speech. As I type, they are actively recruiting people with Parkinson’s and related conditions, and they have plans to expand this to other conditions as the project progresses. This research will result in more inclusive data sets that will let more people with disabilities use voice assistants, dictation software, and voice-response services as well as control their computers and other devices more easily, using only their voice.
  • Text transformation. The current generation of LLMs is quite capable of adjusting existing text content without injecting hallucinations. This is hugely empowering for people with cognitive disabilities who may benefit from text summaries or simplified versions of text or even text that’s prepped for Bionic Reading.
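That last item doesn’t have to be tied to any one vendor. A small sketch of a vendor-neutral approach: build a constrained rewrite prompt and hand it to whatever model you have access to. The `llm` parameter here is a stand-in for your own client callable, not a real API.

```python
def simplify(text: str, llm, reading_level: str = "plain language") -> str:
    """Ask an LLM to simplify text without adding new claims.
    `llm` is any callable mapping a prompt string to a completion string
    (a stand-in for your actual model client)."""
    prompt = (
        f"Rewrite the following text in {reading_level}. "
        "Keep every fact; do not add new information; use short sentences.\n\n"
        f"{text}"
    )
    return llm(prompt)
```

Keeping the prompt’s constraints explicit (“keep every fact; do not add new information”) is what makes this a transformation task, where current models are reasonably reliable, rather than an open-ended generation task, where hallucination is a bigger risk.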

The importance of diverse teams and data

We need to recognize that our differences matter. Our lived experiences are influenced by the intersections of the identities that we exist in. These lived experiences—with all their complexities (and joys and pain)—are valuable inputs to the software, services, and societies that we shape. Our differences need to be represented in the data that we use to train new models, and the folks who contribute that valuable information need to be compensated for sharing it with us. Inclusive data sets yield more robust models that foster more equitable outcomes.

Want a model that doesn’t demean or patronize or objectify people with disabilities? Make sure that you have content about disabilities that’s authored by people with a range of disabilities, and make sure that that’s well represented in the training data.

Want a model that doesn’t use ableist language? You may be able to use existing data sets to build a filter that can intercept and remediate ableist language before it reaches readers. That being said, when it comes to sensitivity reading, AI models won’t be replacing human copy editors anytime soon. 

Want a coding copilot that gives you accessible recommendations from the jump? Train it on code that you know to be accessible.


I have no doubt that AI can and will harm people… today, tomorrow, and well into the future. But I also believe that we can acknowledge that and, with an eye towards accessibility (and, more broadly, inclusion), make thoughtful, considerate, and intentional changes in our approaches to AI that will reduce harm over time as well. Today, tomorrow, and well into the future.


Many thanks to Kartik Sawhney for helping me with the development of this piece, Ashley Bischoff for her invaluable editorial assistance, and, of course, Joe Dolson for the prompt.

The Wax and the Wane of the Web

I offer a single bit of advice to friends and family when they become new parents: When you start to think that you’ve got everything figured out, everything will change. Just as you start to get the hang of feedings, diapers, and regular naps, it’s time for solid food, potty training, and overnight sleeping. When you figure those out, it’s time for preschool and rare naps. The cycle goes on and on.

The same applies for those of us working in design and development these days. Having worked on the web for almost three decades at this point, I’ve seen the regular wax and wane of ideas, techniques, and technologies. Each time that we as developers and designers get into a regular rhythm, some new idea or technology comes along to shake things up and remake our world.

How we got here

I built my first website in the mid-’90s. Design and development on the web back then was a free-for-all, with few established norms. For any layout aside from a single column, we used table elements, often with empty cells containing a single pixel spacer GIF to add empty space. We styled text with numerous font tags, nesting the tags every time we wanted to vary the font style. And we had only three or four typefaces to choose from: Arial, Courier, or Times New Roman. When Verdana and Georgia came out in 1996, we rejoiced because our options had nearly doubled. The only safe colors to choose from were the 216 “web safe” colors known to work across platforms. The few interactive elements (like contact forms, guest books, and counters) were mostly powered by CGI scripts (predominantly written in Perl at the time). Achieving any kind of unique look involved a pile of hacks all the way down. Interaction was often limited to specific pages in a site.

The birth of web standards

At the turn of the century, a new cycle started. Crufty code littered with table layouts and font tags waned, and a push for web standards waxed. Newer technologies like CSS got more widespread adoption by browser makers, developers, and designers. This shift toward standards didn’t happen accidentally or overnight. It took active engagement between the W3C and browser vendors and heavy evangelism from folks like the Web Standards Project to build standards. A List Apart and books like Designing with Web Standards by Jeffrey Zeldman played key roles in teaching developers and designers why standards are important, how to implement them, and how to sell them to their organizations. And approaches like progressive enhancement introduced the idea that content should be available for all browsers—with additional enhancements available for more advanced browsers. Meanwhile, sites like the CSS Zen Garden showcased just how powerful and versatile CSS can be when combined with a solid semantic HTML structure.

Server-side languages like PHP, Java, and .NET overtook Perl as the predominant back-end processors, and the cgi-bin was tossed in the trash bin. With these better server-side tools came the first era of web applications, starting with content-management systems (particularly in the blogging space with tools like Blogger, Greymatter, Movable Type, and WordPress). In the mid-2000s, AJAX opened doors for asynchronous interaction between the front end and back end. Suddenly, pages could update their content without needing to reload. A crop of JavaScript frameworks like Prototype, YUI, and jQuery arose to help developers build more reliable client-side interaction across browsers that had wildly varying levels of standards support. Techniques like image replacement let crafty designers and developers display fonts of their choosing. And technologies like Flash made it possible to add animations, games, and even more interactivity.

These new technologies, standards, and techniques reinvigorated the industry in many ways. Web design flourished as designers and developers explored more diverse styles and layouts. But we still relied on tons of hacks. Early CSS was a huge improvement over table-based layouts when it came to basic layout and text styling, but its limitations at the time meant that designers and developers still relied heavily on images for complex shapes (such as rounded or angled corners) and tiled backgrounds for the appearance of full-length columns (among other hacks). Complicated layouts required all manner of nested floats or absolute positioning (or both). Flash and image replacement for custom fonts was a great start toward varying the typefaces from the big five, but both hacks introduced accessibility and performance problems. And JavaScript libraries made it easy for anyone to add a dash of interaction to pages, although at the cost of doubling or even quadrupling the download size of simple websites.

The web as software platform

The symbiosis between the front end and back end continued to improve, and that led to the current era of modern web applications. Between expanded server-side programming languages (which kept growing to include Ruby, Python, Go, and others) and newer front-end tools like React, Vue, and Angular, we could build fully capable software on the web. Alongside these tools came others, including collaborative version control, build automation, and shared package libraries. What was once primarily an environment for linked documents became a realm of infinite possibilities.

At the same time, mobile devices became more capable, and they gave us internet access in our pockets. Mobile apps and responsive design opened up opportunities for new interactions anywhere and any time.

This combination of capable mobile devices and powerful development tools contributed to the waxing of social media and other centralized tools for people to connect and consume. As it became easier and more common to connect with others directly on Twitter, Facebook, and even Slack, the desire for hosted personal sites waned. Social media offered connections on a global scale, with both the good and bad that that entails.

Want a much more extensive history of how we got here, with some other takes on ways that we can improve? Jeremy Keith wrote “Of Time and the Web.” Or check out the “Web Design History Timeline” at the Web Design Museum. Neal Agarwal also has a fun tour through “Internet Artifacts.”

Where we are now

In the last couple of years, it’s felt like we’ve begun to reach another major inflection point. As social-media platforms fracture and wane, there’s been a growing interest in owning our own content again. There are many different ways to make a website, from the tried-and-true classic of hosting plain HTML files to static site generators to content management systems of all flavors. The fracturing of social media also comes with a cost: we lose crucial infrastructure for discovery and connection. Webmentions, RSS, ActivityPub, and other tools of the IndieWeb can help with this, but they’re still relatively underimplemented and hard to use for the less nerdy. We can build amazing personal websites and add to them regularly, but without discovery and connection, it can sometimes feel like we may as well be shouting into the void.

Browser support for CSS, JavaScript, and other standards like web components has accelerated, especially through efforts like Interop. New technologies gain support across the board in a fraction of the time that they used to. I often learn about a new feature and check its browser support only to find that its coverage is already above 80 percent. Nowadays, the barrier to using newer techniques often isn’t browser support but simply the limits of how quickly designers and developers can learn what’s available and how to adopt it.

Today, with a few commands and a couple of lines of code, we can prototype almost any idea. All the tools that we now have available make it easier than ever to start something new. But the upfront cost that these frameworks may save in initial delivery eventually comes due as upgrading and maintaining them becomes a part of our technical debt.

If we rely on third-party frameworks, adopting new standards can sometimes take longer since we may have to wait for those frameworks to adopt those standards. These frameworks—which used to let us adopt new techniques sooner—have now become hindrances instead. These same frameworks often come with performance costs too, forcing users to wait for scripts to load before they can read or interact with pages. And when scripts fail (whether through poor code, network issues, or other environmental factors), there’s often no alternative, leaving users with blank or broken pages.

Where do we go from here?

Today’s hacks help to shape tomorrow’s standards. And there’s nothing inherently wrong with embracing hacks—for now—to move the present forward. Problems only arise when we’re unwilling to admit that they’re hacks or we hesitate to replace them. So what can we do to create the future we want for the web?

Build for the long haul. Optimize for performance, for accessibility, and for the user. Weigh the costs of those developer-friendly tools. They may make your job a little easier today, but how do they affect everything else? What’s the cost to users? To future developers? To standards adoption? Sometimes the convenience may be worth it. Sometimes it’s just a hack that you’ve grown accustomed to. And sometimes it’s holding you back from even better options.

Start from standards. Standards continue to evolve over time, but browsers have done a remarkably good job of continuing to support older standards. The same isn’t always true of third-party frameworks. Sites built with even the hackiest of HTML from the ’90s still work just fine today. The same can’t always be said of sites built with frameworks even after just a couple years.

Design with care. Whether your craft is code, pixels, or processes, consider the impacts of each decision. The convenience of many a modern tool comes at the cost of not always understanding the underlying decisions that have led to its design and not always considering the impact that those decisions can have. Rather than rushing headlong to “move fast and break things,” use the time saved by modern tools to consider more carefully and design with deliberation.

Always be learning. If you’re always learning, you’re also growing. Sometimes it may be hard to pinpoint what’s worth learning and what’s just today’s hack. You might end up focusing on something that won’t matter next year, even if you were to focus solely on learning standards. (Remember XHTML?) But constant learning opens up new connections in your brain, and the hacks that you learn one day may help to inform different experiments another day.

Play, experiment, and be weird! This web that we’ve built is the ultimate experiment. It’s the single largest human endeavor in history, and yet each of us can create our own pocket within it. Be courageous and try new things. Build a playground for ideas. Make goofy experiments in your own mad science lab. Start your own small business. There has never been a more empowering place to be creative, take risks, and explore what we’re capable of.

Share and amplify. As you experiment, play, and learn, share what’s worked for you. Write on your own website, post on whichever social media site you prefer, or shout it from a TikTok. Write something for A List Apart! But take the time to amplify others too: find new voices, learn from them, and share what they’ve taught you.

Go forth and make

As designers and developers for the web (and beyond), we’re responsible for building the future every day, whether that may take the shape of personal websites, social media tools used by billions, or anything in between. Let’s imbue our values into the things that we create, and let’s make the web a better place for everyone. Create that thing that only you are uniquely qualified to make. Then share it, make it better, make it again, or make something new. Learn. Make. Share. Grow. Rinse and repeat. Every time you think that you’ve mastered the web, everything will change.

To Ignite a Personalization Practice, Run this Prepersonalization Workshop

Picture this. You’ve joined a squad at your company that’s designing new product features with an emphasis on automation or AI. Or your company has just implemented a personalization engine. Either way, you’re designing with data. Now what? When it comes to designing for personalization, there are many cautionary tales, no overnight successes, and few guides for the perplexed. 

Between the fantasy of getting it right and the fear of it going wrong—like when we encounter “persofails” in the vein of a company repeatedly imploring everyday consumers to buy additional toilet seats—the personalization gap is real. It’s an especially confounding place to be a digital professional without a map, a compass, or a plan.

For those of you venturing into personalization, there’s no Lonely Planet and few tour guides because effective personalization is so specific to each organization’s talent, technology, and market position. 

But you can ensure that your team has packed its bags sensibly.

There’s a DIY formula to increase your chances of success. At minimum, it will defuse your boss’s irrational exuberance. But before the party, you’ll need to prepare effectively.

We call it prepersonalization.

Behind the music

Consider Spotify’s DJ feature, which debuted this past year.

We’re used to seeing the polished final result of a personalization feature. Before the year-end award, the making-of backstory, or the behind-the-scenes victory lap, a personalized feature had to be conceived, budgeted, and prioritized. Before any personalization feature goes live in your product or service, it lives amid a backlog of worthy ideas for expressing customer experiences more dynamically.

So how do you know where to place your personalization bets? How do you design consistent interactions that won’t trip up users or—worse—breed mistrust? We’ve found that for many budgeted programs to justify their ongoing investments, they first needed one or more workshops to convene key stakeholders and internal customers of the technology. Make yours count.

From Big Tech to fledgling startups, we’ve seen the same evolution up close with our clients. In our experience working on small and large personalization efforts, a program’s ultimate track record—and its ability to weather tough questions, work steadily toward shared answers, and organize its design and technology efforts—turns on how effectively these prepersonalization activities play out.

Time and again, we’ve seen effective workshops separate future success stories from unsuccessful efforts, saving countless hours and resources, and preserving collective well-being in the process.

A personalization practice involves a multiyear effort of testing and feature development. It’s not a switch-flip moment in your tech stack. It’s best managed as a backlog that often evolves through three steps: 

  1. customer experience optimization (CXO, also known as A/B testing or experimentation)
  2. always-on automations (whether rules-based or machine-generated)
  3. mature features or standalone product development (such as Spotify’s DJ experience)

This is why we created our progressive personalization framework and why we’re field-testing an accompanying deck of cards: we believe that there’s a base grammar, a set of “nouns and verbs” that your organization can use to design experiences that are customized, personalized, or automated. You won’t need these cards. But we strongly recommend that you create something similar, whether digital or physical.

Set your kitchen timer

How long does it take to cook up a prepersonalization workshop? The surrounding assessment activities that we recommend including can (and often do) span weeks. For the core workshop, we recommend aiming for two to three days. Here’s a summary of our broader approach along with details on the essential first-day activities.

The full arc of the wider workshop is threefold:

  1. Kickstart: This sets the terms of engagement as you focus on the opportunity as well as the readiness and drive of your team and your leadership.
  2. Plan your work: This is the heart of the card-based workshop activities where you specify a plan of attack and the scope of work.
  3. Work your plan: This phase is all about creating a competitive environment for team participants to individually pitch their own pilots that each contain a proof-of-concept project, its business case, and its operating model.

Give yourself at least a day, split into two large time blocks, to power through a concentrated version of those first two phases.

Kickstart: Whet your appetite

We call the first lesson the “landscape of connected experience.” It explores the personalization possibilities in your organization. A connected experience, in our parlance, is any UX requiring the orchestration of multiple systems of record on the backend. This could be a content-management system combined with a marketing-automation platform. It could be a digital-asset manager combined with a customer-data platform.

Spark conversation by naming consumer examples and business-to-business examples of connected experience interactions that you admire, find familiar, or even dislike. This should cover a representative range of personalization patterns, including automated app-based interactions (such as onboarding sequences or wizards), notifications, and recommenders. We have a catalog of these in the cards. Here’s a list of 142 different interactions to jog your thinking.

This is all about setting the table. What are the possible paths for the practice in your organization? If you want a broader view, here’s a long-form primer and a strategic framework.

Assess each example that you discuss for its complexity and the level of effort that you estimate that it would take for your team to deliver that feature (or something similar). In our cards, we divide connected experiences into five levels: functions, features, experiences, complete products, and portfolios. Size your own build here. This will help to focus the conversation on the merits of ongoing investment as well as the gap between what you deliver today and what you want to deliver in the future.

Next, have your team plot each idea on the following 2×2 grid, which lays out the four enduring arguments for a personalized experience. This is critical because it emphasizes how personalization can not only help your external customers but also affect your own ways of working. It’s also a reminder (which is why we used the word argument earlier) of the broader effort beyond these tactical interventions.

Each team member should vote on where they see your product or service putting its emphasis. Naturally, you can’t prioritize all of them. The intention here is to flesh out how different departments may view their own upsides to the effort, which can vary from one to the next. Documenting your desired outcomes lets you know how the team internally aligns across representatives from different departments or functional areas.

The third and final kickstart activity is about naming your personalization gap. Is your customer journey well documented? Will data and privacy compliance be too big of a challenge? Do you have content metadata needs that you have to address? (We’re pretty sure that you do: it’s just a matter of recognizing the relative size of that need and its remedy.) In our cards, we’ve noted a number of program risks, including common team dispositions. Our Detractor card, for example, lists six stakeholder behaviors that hinder progress.

Effectively collaborating and managing expectations is critical to your success. Consider the potential barriers to your future progress. Press the participants to name specific steps to overcome or mitigate those barriers in your organization. As studies have shown, personalization efforts face many common barriers.

At this point, you’ve hopefully discussed sample interactions, emphasized a key area of benefit, and flagged key gaps. Good—you’re ready to continue.

Hit that test kitchen

Next, let’s look at what you’ll need to bring your personalization recipes to life. Personalization engines, which are robust software suites for automating and expressing dynamic content, can intimidate new customers. Their capabilities are sweeping and powerful, and they present broad options for how your organization can conduct its activities. This raises the question: Where do you begin when you’re configuring a connected experience?

What’s important here is to avoid treating the installed software as if it were a dream kitchen from some fantasy remodeling project (as one of our client executives memorably put it). These software engines are more like test kitchens where your team can begin devising, tasting, and refining the snacks and meals that will become a part of your personalization program’s regularly evolving menu.

The ultimate menu of the prioritized backlog will come together over the course of the workshop. And creating “dishes” is the way that you’ll have individual team stakeholders construct personalized interactions that serve their needs or the needs of others.

The dishes will come from recipes, and those recipes have set ingredients.

Verify your ingredients

Like a good product manager, you’ll make sure—and you’ll validate with the right stakeholders present—that you have all the ingredients on hand to cook up your desired interaction (or that you can work out what needs to be added to your pantry). These ingredients include the audience that you’re targeting, content and design elements, the context for the interaction, and your measure for how it’ll come together.

This isn’t just about discovering requirements. Documenting your personalizations as a series of if-then statements lets the team: 

  1. compare findings toward a unified approach for developing features, not unlike when artists paint with the same palette; 
  2. specify a consistent set of interactions that users find uniform or familiar; 
  3. and develop parity across performance measurements and key performance indicators too. 

This helps you streamline your designs and your technical efforts while you deliver a shared palette of core motifs of your personalized or automated experience.
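To make the if-then format concrete, here’s a minimal sketch of personalization rules documented as data and evaluated in code. The segment names, conditions, and metrics are hypothetical placeholders for illustration only, not part of any particular personalization engine:

```python
# A minimal sketch of documenting personalizations as if-then statements.
# Segment names, conditions, and metric names are hypothetical examples.

rules = [
    {"if": {"audience": "guest", "context": "product_page"},
     "then": {"show": "related_title_banner"},
     "measure": "banner_click_through"},
    {"if": {"audience": "new_subscriber", "context": "registration"},
     "then": {"send": "welcome_email"},
     "measure": "thirty_day_retention"},
]

def matching_rules(user, context):
    """Return every rule whose conditions all hold for this user and context."""
    state = {"audience": user["segment"], "context": context}
    return [r for r in rules if all(state.get(k) == v for k, v in r["if"].items())]

# A guest browsing a product page triggers only the first rule.
print([r["then"] for r in matching_rules({"segment": "guest"}, "product_page")])
```

Writing rules in one shared shape like this is what lets teams compare findings, keep interactions consistent, and measure everything the same way.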

Compose your recipe

What ingredients are important to you? Think of a who-what-when-why construct

  • Who are your key audience segments or groups?
  • What kind of content will you give them, in what design elements, and under what circumstances?
  • And for which business and user benefits?

We first developed these cards and card categories five years ago. We regularly play-test their fit with conference audiences and clients. And we still encounter new possibilities. But they all follow an underlying who-what-when-why logic.

Here are three examples for a subscription-based reading app, which you can generally follow along with right to left in the cards in the accompanying photo below. 

  1. Nurture personalization: When a guest or an unknown visitor interacts with a product title, a banner or alert bar appears that makes it easier for them to encounter a related title they may want to read, saving them time.
  2. Welcome automation: When there’s a newly registered user, an email is generated to call out the breadth of the content catalog and to make them a happier subscriber.
  3. Winback automation: Before their subscription lapses or after a recent failed renewal, a user is sent an email that gives them a promotional offer to suggest that they reconsider renewing or to remind them to renew.
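If it helps to make the construct tangible, the three recipes above can be captured as small structured who-what-when-why records. The schema and field values below simply restate the examples and are purely illustrative, not a prescribed format:

```python
from dataclasses import dataclass

# Encoding the who-what-when-why construct as a structured record.
# The schema and wording are illustrative, not a prescribed format.

@dataclass
class Recipe:
    name: str
    who: str   # audience segment or group
    what: str  # content and design element delivered
    when: str  # triggering circumstance
    why: str   # business or user benefit

recipes = [
    Recipe("Nurture personalization", who="guest or unknown visitor",
           what="banner or alert bar surfacing a related title",
           when="interacts with a product title", why="saves them time"),
    Recipe("Welcome automation", who="newly registered user",
           what="email calling out the breadth of the catalog",
           when="on registration", why="happier subscriber"),
    Recipe("Winback automation", who="lapsing or failed-renewal user",
           what="email with a promotional offer",
           when="before lapse or after a failed renewal",
           why="prompts them to reconsider renewing"),
]

for r in recipes:
    print(f"{r.name}: WHO {r.who} / WHAT {r.what} / WHEN {r.when} / WHY {r.why}")
```

Blank cards work just as well; the point is that every recipe answers the same four questions.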

A useful preworkshop activity may be to think through a first draft of what these cards might be for your organization, although we’ve also found that this process sometimes flows best through cocreating the recipes themselves. Start with a set of blank cards, and begin labeling and grouping them through the design process, eventually distilling them to a refined subset of highly useful candidate cards.

You can think of the later stages of the workshop as moving from recipes toward a cookbook in focus—like a more nuanced customer-journey mapping. Individual “cooks” will pitch their recipes to the team, using a common jobs-to-be-done format so that measurability and results are baked in, and from there, the resulting collection will be prioritized for finished design and delivery to production.

Better kitchens require better architecture

Simplifying a customer experience is a complicated effort for those who are inside delivering it. Beware anyone who says otherwise. That said, “Complicated problems can be hard to solve, but they are addressable with rules and recipes.”

When personalization becomes a laugh line, it’s because a team is overfitting: they aren’t designing with their best data. Every organization has metadata debt to go along with its technical debt, and, like a sparse pantry, it creates a drag on personalization effectiveness. Your AI’s output quality, for example, is indeed limited by your IA. Spotify’s poster-child prowess today was unfathomable before it acquired a seemingly modest metadata startup that now powers its underlying information architecture.

You can definitely stand the heat…

Personalization technology opens a doorway into a confounding ocean of possible designs. Only a disciplined and highly collaborative approach will bring about the necessary focus and intention to succeed. So banish the dream kitchen. Instead, hit the test kitchen to save time, preserve job satisfaction and security, and safely dispense with the fanciful ideas that originate upstairs of the doers in your organization. There are meals to serve and mouths to feed.

This workshop framework gives you sound beginnings and a fighting shot at lasting success. Wiring up your information layer isn’t an overnight affair. But if you use the same cookbook and shared recipes, you’ll have solid footing for success. We designed these activities to make your organization’s needs concrete and clear, long before the hazards pile up.

While there are associated costs toward investing in this kind of technology and product design, your ability to size up and confront your unique situation and your digital capabilities is time well spent. Don’t squander it. The proof, as they say, is in the pudding.

User Research Is Storytelling

Ever since I was a boy, I’ve been fascinated with movies. I loved the characters and the excitement—but most of all the stories. I wanted to be an actor. And I believed that I’d get to do the things that Indiana Jones did and go on exciting adventures. I even dreamed up ideas for movies that my friends and I could make and star in. But they never went any further. I did, however, end up working in user experience (UX). Now, I realize that there’s an element of theater to UX—I hadn’t really considered it before, but user research is storytelling. And to get the most out of user research, you need to tell a good story where you bring stakeholders—the product team and decision makers—along and get them interested in learning more.

Think of your favorite movie. More than likely it follows a three-act structure that’s commonly seen in storytelling: the setup, the conflict, and the resolution. The first act shows what exists today, and it helps you get to know the characters and the challenges and problems that they face. Act two introduces the conflict, where the action is. Here, problems grow or get worse. And the third and final act is the resolution. This is where the issues are resolved and the characters learn and change. I believe that this structure is also a great way to think about user research, and I think that it can be especially helpful in explaining user research to others.

Use storytelling as a structure to do research

It’s sad to say, but many have come to see research as expendable. If budgets or timelines are tight, research tends to be one of the first things to go. Instead of investing in research, some product managers rely on designers or—worse—their own opinion to make the “right” choices for users based on their experience or accepted best practices. That may get teams some of the way, but that approach can easily miss solving users’ real problems. If we want to remain user-centered, that’s a trap we should avoid. User research elevates design. It keeps design on track, pointing to problems and opportunities. Being aware of the issues with your product and reacting to them can help you stay ahead of your competitors.

In the three-act structure, each act corresponds to a part of the process, and each part is critical to telling the whole story. Let’s look at the different acts and how they align with user research.

Act one: setup

The setup is all about understanding the background, and that’s where foundational research comes in. Foundational research (also called generative, discovery, or initial research) helps you understand users and identify their problems. You’re learning about what exists today, the challenges users have, and how the challenges affect them—just like in the movies. To do foundational research, you can conduct contextual inquiries or diary studies (or both!), which can help you start to identify problems as well as opportunities. It doesn’t need to be a huge investment in time or money.

Erika Hall writes about minimum viable ethnography, which can be as simple as spending 15 minutes with a user and asking them one thing: “‘Walk me through your day yesterday.’ That’s it. Present that one request. Shut up and listen to them for 15 minutes. Do your damndest to keep yourself and your interests out of it. Bam, you’re doing ethnography.” According to Hall, “[This] will probably prove quite illuminating. In the highly unlikely case that you didn’t learn anything new or useful, carry on with enhanced confidence in your direction.”

This makes total sense to me. And I love that this makes user research so accessible. You don’t need to prepare a lot of documentation; you can just recruit participants and do it! This can yield a wealth of information about your users, and it’ll help you better understand them and what’s going on in their lives. That’s really what act one is all about: understanding where users are coming from. 

Jared Spool talks about the importance of foundational research and how it should form the bulk of your research. If you can draw from any additional user data that you can get your hands on, such as surveys or analytics, that can supplement what you’ve heard in the foundational studies or even point to areas that need further investigation. Together, all this data paints a clearer picture of the state of things and all its shortcomings. And that’s the beginning of a compelling story. It’s the point in the plot where you realize that the main characters—or the users in this case—are facing challenges that they need to overcome. Like in the movies, this is where you start to build empathy for the characters and root for them to succeed. And hopefully stakeholders are now doing the same. Their sympathy may be with their business, which could be losing money because users can’t complete certain tasks. Or maybe they do empathize with users’ struggles. Either way, act one is your initial hook to get the stakeholders interested and invested.

Once stakeholders begin to understand the value of foundational research, that can open doors to more opportunities that involve users in the decision-making process. And that can guide product teams toward being more user-centered. This benefits everyone—users, the product, and stakeholders. It’s like winning an Oscar in movie terms—it often leads to your product being well received and successful. And this can be an incentive for stakeholders to repeat this process with other products. Storytelling is the key to this process, and knowing how to tell a good story is the only way to get stakeholders to really care about doing more research. 

This brings us to act two, where you iteratively evaluate a design or concept to see whether it addresses the issues.

Act two: conflict

Act two is all about digging deeper into the problems that you identified in act one. This usually involves directional research, such as usability tests, where you assess a potential solution (such as a design) to see whether it addresses the issues that you found. The issues could include unmet needs or problems with a flow or process that’s tripping users up. Like act two in a movie, more issues will crop up along the way. It’s here that you learn more about the characters as they grow and develop through this act. 

Usability tests should typically include around five participants according to Jakob Nielsen, who found that that number of users can usually identify most of the problems: “As you add more and more users, you learn less and less because you will keep seeing the same things again and again… After the fifth user, you are wasting your time by observing the same findings repeatedly but not learning much new.” 
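Nielsen’s claim rests on a simple model from his work with Tom Landauer: the share of usability problems found by n users is roughly 1 - (1 - L)^n, where L is the probability that a single user surfaces a given problem; Nielsen cites L ≈ 0.31 as a typical value across projects. A quick sketch of the diminishing returns:

```python
# Diminishing returns in usability testing, per the Nielsen-Landauer model:
# share of problems found by n users ~= 1 - (1 - L)**n, where L is the
# probability that one user surfaces a given problem (~0.31 in Nielsen's data).

def problems_found(n, L=0.31):
    return 1 - (1 - L) ** n

for n in (1, 3, 5, 10):
    print(f"{n:2d} users -> {problems_found(n):.0%} of problems found")
```

With the typical L, five participants already surface roughly 84% of the problems, which is why adding a sixth or seventh user mostly replays findings you’ve already seen.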

There are parallels with storytelling here too; if you try to tell a story with too many characters, the plot may get lost. Having fewer participants means that each user’s struggles will be more memorable and easier to relay to other stakeholders when talking about the research. This can help convey the issues that need to be addressed while also highlighting the value of doing the research in the first place.

Researchers have run usability tests in person for decades, but you can also conduct usability tests remotely using tools like Microsoft Teams, Zoom, or other teleconferencing software. This approach has become increasingly popular since the beginning of the pandemic, and it works well. You can think of in-person usability tests like going to a play and remote sessions as more like watching a movie. There are advantages and disadvantages to each. In-person usability research is a much richer experience. Stakeholders can experience the sessions with other stakeholders. You also get real-time reactions—including surprise, agreement, disagreement, and discussions about what they’re seeing. Much like going to a play, where audiences get to take in the stage, the costumes, the lighting, and the actors’ interactions, in-person research lets you see users up close, including their body language, how they interact with the moderator, and how the scene is set up.

If in-person usability testing is like watching a play—staged and controlled—then conducting usability testing in the field is like immersive theater, where any two sessions might be very different from one another. You can take usability testing into the field by creating a replica of the space where users interact with the product and then conducting your research there. Or you can go out to meet users at their location. With either option, you get to see how things work in context, things come up that wouldn’t have in a lab environment—and conversations can shift in entirely different directions. As researchers, you have less control over how these sessions go, but this can sometimes help you understand users even better. Meeting users where they are can provide clues to the external forces that could be affecting how they use your product. In-person usability tests provide a level of detail that’s often missing from remote usability tests.

That’s not to say that the “movies”—remote sessions—aren’t a good option. Remote sessions can reach a wider audience. They allow a lot more stakeholders to be involved in the research and to see what’s going on. And they open the doors to a much wider geographical pool of users. But with any remote session, there’s the potential for wasted time if participants can’t log in or get their microphones working.

The benefit of usability testing, whether remote or in person, is that you get to see real users interact with the designs in real time, and you can ask them questions to understand their thought processes and grasp of the solution. This can help you not only identify problems but also glean why they’re problems in the first place. Furthermore, you can test hypotheses and gauge whether your thinking is correct. By the end of the sessions, you’ll have a much clearer picture of how usable the designs are and whether they work for their intended purposes. Act two is the heart of the story—where the excitement is—but there can be surprises too. This is equally true of usability tests. Often, participants will say unexpected things, which change the way that you look at things—and these twists in the story can move things in new directions. 

Unfortunately, user research is sometimes seen as expendable. And too often usability testing is the only research process that some stakeholders think that they ever need. In fact, if the designs that you’re evaluating in the usability test aren’t grounded in a solid understanding of your users (foundational research), there’s not much to be gained by doing usability testing in the first place. That’s because you’re narrowing the focus of what you’re getting feedback on, without understanding the users’ needs. As a result, there’s no way of knowing whether the designs might solve a problem that users have. It’s only feedback on a particular design in the context of a usability test.  

On the other hand, if you only do foundational research, you might have set out to solve the right problem, but you won’t know whether the thing that you’re building actually solves it. This illustrates the importance of doing both foundational and directional research.

In act two, stakeholders will—hopefully—get to watch the story unfold in the user sessions, which creates the conflict and tension in the current design by surfacing their highs and lows. And in turn, this can help motivate stakeholders to address the issues that come up.

Act three: resolution

While the first two acts are about understanding the background and the tensions that can propel stakeholders into action, the third part is about resolving the problems from the first two acts. While it’s important to have an audience for the first two acts, it’s crucial that they stick around for the final act. That means the whole product team, including developers, UX practitioners, business analysts, delivery managers, product managers, and any other stakeholders that have a say in the next steps. It allows the whole team to hear users’ feedback together, ask questions, and discuss what’s possible within the project’s constraints. And it lets the UX research and design teams clarify, suggest alternatives, or give more context behind their decisions. So you can get everyone on the same page and get agreement on the way forward.

This act is mostly told in voiceover with some audience participation. The researcher is the narrator, who paints a picture of the issues and what the future of the product could look like given the things that the team has learned. They give the stakeholders their recommendations and their guidance on creating this vision.

Nancy Duarte in the Harvard Business Review offers an approach to structuring presentations that follow a persuasive story. “The most effective presenters use the same techniques as great storytellers: By reminding people of the status quo and then revealing the path to a better way, they set up a conflict that needs to be resolved,” writes Duarte. “That tension helps them persuade the audience to adopt a new mindset or behave differently.”

This type of structure aligns well with research results, and particularly results from usability tests. It provides evidence for “what is” (the problems that you’ve identified) and “what could be” (your recommendations on how to address them), moving back and forth between the two to build the case for change.

You can reinforce your recommendations with examples of things that competitors are doing that could address these issues or with examples where competitors are gaining an edge. Or they can be visual, like quick mockups of how a new design could look that solves a problem. These can help generate conversation and momentum. And this continues until the end of the session when you’ve wrapped everything up in the conclusion by summarizing the main issues and suggesting a way forward. This is the part where you reiterate the main themes or problems and what they mean for the product—the denouement of the story. This stage gives stakeholders the next steps and hopefully the momentum to take those steps!

While we are nearly at the end of this story, let’s reflect on the idea that user research is storytelling. All the elements of a good story are there in the three-act structure of user research: 

  • Act one: You meet the protagonists (the users) and the antagonists (the problems affecting users). This is the beginning of the plot. In act one, researchers might use methods including contextual inquiry, ethnography, diary studies, surveys, and analytics. The output of these methods can include personas, empathy maps, user journeys, and analytics dashboards.
  • Act two: Next, there’s character development. There’s conflict and tension as the protagonists encounter problems and challenges, which they must overcome. In act two, researchers might use methods including usability testing, competitive benchmarking, and heuristics evaluation. The output of these can include usability findings reports, UX strategy documents, usability guidelines, and best practices.
  • Act three: The protagonists triumph and you see what a better future looks like. In act three, researchers may use methods including presentation decks, storytelling, and digital media. The output of these can include presentation decks, video clips, audio clips, and pictures.

The researcher has multiple roles: they’re the storyteller, the director, and the producer. The participants have a small role, but they are significant characters (in the research). And the stakeholders are the audience. But the most important thing is to get the story right and to use storytelling to tell users’ stories through research. By the end, the stakeholders should walk away with a purpose and an eagerness to resolve the product’s ills. 

So the next time that you’re planning research with clients or you’re speaking to stakeholders about research that you’ve done, think about how you can weave in some storytelling. Ultimately, user research is a win-win for everyone, and you just need to get stakeholders interested in how the story ends.

From Beta to Bedrock: Build Products that Stick

As a product builder over too many years to mention, I’ve lost count of the number of times I’ve seen promising ideas go from zero to hero in a few weeks, only to fizzle out within months.

Financial products, which is the field I work in, are no exception. With people’s real hard-earned money on the line, user expectations running high, and a crowded market, it’s tempting to throw as many features at the wall as possible and hope something sticks. But this approach is a recipe for disaster. Here’s why:

The pitfalls of feature-first development

When you start building a financial product from the ground up, or are migrating existing customer journeys from paper or telephony channels onto online banking or mobile apps, it’s easy to get caught up in the excitement of creating new features. You might think, “If I can just add one more thing that solves this particular user problem, they’ll love me!” But what happens when you inevitably hit a roadblock because the narcs (your security team!) don’t like it? When a hard-fought feature isn’t as popular as you thought, or it breaks due to unforeseen complexity?

This is where the concept of Minimum Viable Product (MVP) comes in. Jason Fried’s book Getting Real and his podcast Rework often touch on this idea, even if he doesn’t always call it that. An MVP is a product that provides just enough value to your users to keep them engaged, but not so much that it becomes overwhelming or difficult to maintain. It sounds like an easy concept, but it requires a razor-sharp eye, a ruthless edge, and the courage to stick by your opinion, because it’s easy to be seduced by “the Columbo Effect”: there’s always “just one more thing…” that someone wants to add.

The problem with most finance apps, however, is that they often become a reflection of the internal politics of the business rather than an experience solely designed around the customer. This means that the focus is on delivering as many features and functionalities as possible to satisfy the needs and desires of competing internal departments, rather than providing a clear value proposition that is focused on what the people out there in the real world want. As a result, these products can very easily bloat to become a mixed bag of confusing, unrelated and ultimately unlovable customer experiences—a feature salad, you might say.

The importance of bedrock

So what’s a better approach? How can we build products that are stable, user-friendly, and—most importantly—stick?

That’s where the concept of “bedrock” comes in. Bedrock is the core element of your product that truly matters to users. It’s the fundamental building block that provides value and stays relevant over time.

In the world of retail banking, which is where I work, the bedrock has got to be in and around the regular servicing journeys. People open their current account once in a blue moon but they look at it every day. They sign up for a credit card every year or two, but they check their balance and pay their bill at least once a month.

Identifying the core tasks that people want to do and then relentlessly striving to make them easy to do, dependable, and trustworthy is where the gravy’s at.

But how do you get to bedrock? By focusing on the “MVP” approach, prioritizing simplicity, and iterating towards a clear value proposition. This means cutting out unnecessary features and focusing on delivering real value to your users.

It also means having some guts, because your colleagues might not always instantly share your vision to start with. And controversially, sometimes it can even mean making it clear to customers that you’re not going to come to their house and make their dinner. The occasional “opinionated user interface design” (i.e. clunky workaround for edge cases) might sometimes be what you need to use to test a concept or buy you space to work on something more important.

Practical strategies for building financial products that stick

So what are the key strategies I’ve learned from my own experience and research?

  1. Start with a clear “why”: What problem are you trying to solve? For whom? Make sure your mission is crystal clear before building anything. Make sure it aligns with your company’s objectives, too.
  2. Focus on a single, core feature and obsess on getting that right before moving on to something else: Resist the temptation to add too many features at once. Instead, choose one that delivers real value and iterate from there.
  3. Prioritize simplicity over complexity: Less is often more when it comes to financial products. Cut out unnecessary bells and whistles and keep the focus on what matters most.
  4. Embrace continuous iteration: Bedrock isn’t a fixed destination—it’s a dynamic process. Continuously gather user feedback, refine your product, and iterate towards that bedrock state.
  5. Stop, look and listen: Don’t just test your product as part of your delivery process—test it repeatedly in the field. Use it yourself. Run A/B tests. Gather user feedback. Talk to people who use it, and refine accordingly.
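Running A/B tests, as step 5 suggests, raises the practical question of when a difference between variants is real rather than noise. As a minimal illustrative sketch (the function name and the conversion numbers below are my own, not from the article), a two-proportion z-test is one common sanity check before acting on a result:

```python
from math import sqrt

def ab_test_z(conv_a, n_a, conv_b, n_b):
    """Two-proportion z-test for an A/B experiment.

    conv_a / conv_b: conversions observed in variants A and B.
    n_a / n_b: users exposed to each variant.
    Returns the z statistic; |z| > 1.96 suggests the difference is
    significant at roughly the 95% level (two-sided).
    """
    p_a = conv_a / n_a
    p_b = conv_b / n_b
    # Pooled conversion rate under the null hypothesis (no difference)
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    # Standard error of the difference in proportions
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# Hypothetical numbers: 5.0% vs 6.5% conversion over 2,400 users each
z = ab_test_z(conv_a=120, n_a=2400, conv_b=156, n_b=2400)
print(f"z = {z:.2f}, significant: {abs(z) > 1.96}")
```

The point is not the statistics machinery itself but the discipline: decide your sample size and threshold before the test, or the “stop, look and listen” step degrades into cherry-picking.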

The bedrock paradox

There’s an interesting paradox at play here: building towards bedrock means sacrificing some short-term growth potential in favour of long-term stability. But the payoff is worth it—products built with a focus on bedrock will outlast and outperform their competitors, and deliver sustained value to users over time.

So, how do you start your journey towards bedrock? Take it one step at a time. Start by identifying those core elements that truly matter to your users. Focus on building and refining a single, powerful feature that delivers real value. And above all, test obsessively, for, in the words of Abraham Lincoln, Alan Kay, or Peter Drucker (whichever attribution you believe!), “The best way to predict the future is to create it.”

A Holistic Framework for Shared Design Leadership

Picture this: You’re in a meeting room at your tech company, and two people are having what looks like the same conversation about the same design problem. One is talking about whether the team has the right skills to tackle it. The other is diving deep into whether the solution actually solves the user’s problem. Same room, same problem, completely different lenses.

This is the beautiful, sometimes messy reality of having both a Design Manager and a Lead Designer on the same team. And if you’re wondering how to make this work without creating confusion, overlap, or the dreaded “too many cooks” scenario, you’re asking the right question.

The traditional answer has been to draw clean lines on an org chart. The Design Manager handles people, the Lead Designer handles craft. Problem solved, right? Except clean org charts are fantasy. In reality, both roles care deeply about team health, design quality, and shipping great work. 

The magic happens when you embrace the overlap instead of fighting it—when you start thinking of your design org as a design organism.

The Anatomy of a Healthy Design Team

Here’s what I’ve learned from years of being on both sides of this equation: think of your design team as a living organism. The Design Manager tends to the mind (the psychological safety, the career growth, the team dynamics). The Lead Designer tends to the body (the craft skills, the design standards, the hands-on work that ships to users).

But just as mind and body aren’t completely separate systems, these roles overlap in important ways. You can’t have a healthy person without both working in harmony. The trick is knowing where those overlaps are and how to navigate them gracefully.

When we look at how healthy teams actually function, three critical systems emerge. Each requires both roles to work together, but with one taking primary responsibility for keeping that system strong.

The Nervous System: People & Psychology

Primary caretaker: Design Manager
Supporting role: Lead Designer

The nervous system is all about signals, feedback, and psychological safety. When this system is healthy, information flows freely, people feel safe to take risks, and the team can adapt quickly to new challenges.

The Design Manager is the primary caretaker here. They’re monitoring the team’s psychological pulse, ensuring feedback loops are healthy, and creating the conditions for people to grow. They’re hosting career conversations, managing workload, and making sure no one burns out.

But the Lead Designer plays a crucial supporting role. They’re providing sensory input about craft development needs, spotting when someone’s design skills are stagnating, and helping identify growth opportunities that the Design Manager might miss.

Design Manager tends to:

  • Career conversations and growth planning
  • Team psychological safety and dynamics
  • Workload management and resource allocation
  • Performance reviews and feedback systems
  • Creating learning opportunities

Lead Designer supports by:

  • Providing craft-specific feedback on team member development
  • Identifying design skill gaps and growth opportunities
  • Offering design mentorship and guidance
  • Signaling when team members are ready for more complex challenges

The Muscular System: Craft & Execution

Primary caretaker: Lead Designer
Supporting role: Design Manager

The muscular system is about strength, coordination, and skill development. When this system is healthy, the team can execute complex design work with precision, maintain consistent quality, and adapt their craft to new challenges.

The Lead Designer is the primary caretaker here. They’re setting design standards, providing craft coaching, and ensuring that shipping work meets the quality bar. They’re the ones who can tell you if a design decision is sound or if the team is solving the right problem.

But the Design Manager plays a crucial supporting role. They’re ensuring the team has the resources and support to do their best craft work, like proper nutrition and recovery time for an athlete.

Lead Designer tends to:

  • Definition of design standards and system usage
  • Feedback on what design work meets the standard
  • Experience direction for the product
  • Design decisions and product-wide alignment
  • Innovation and craft advancement

Design Manager supports by:

  • Ensuring design standards are understood and adopted across the team
  • Confirming experience direction is being followed
  • Supporting practices and systems that scale without bottlenecking
  • Facilitating design alignment across teams
  • Providing resources and removing obstacles to great craft work

The Circulatory System: Strategy & Flow

Shared caretakers: Both Design Manager and Lead Designer

The circulatory system is about how information, decisions, and energy flow through the team. When this system is healthy, strategic direction is clear, priorities are aligned, and the team can respond quickly to new opportunities or challenges.

This is where true partnership happens. Both roles are responsible for keeping the circulation strong, but they’re bringing different perspectives to the table.

Lead Designer contributes:

  • Ensuring the product meets user needs
  • Overall product quality and experience
  • Strategic design initiatives
  • Research-based user needs for each initiative

Design Manager contributes:

  • Communication to team and stakeholders
  • Stakeholder management and alignment
  • Cross-functional team accountability
  • Strategic business initiatives

Both collaborate on:

  • Co-creation of strategy with leadership
  • Team goals and prioritization approach
  • Organizational structure decisions
  • Success measures and frameworks

Keeping the Organism Healthy

The key to making this partnership sing is understanding that all three systems need to work together. A team with great craft skills but poor psychological safety will burn out. A team with great culture but weak craft execution will ship mediocre work. A team with both but poor strategic circulation will work hard on the wrong things.

Be Explicit About Which System You’re Tending

When you’re in a meeting about a design problem, it helps to acknowledge which system you’re primarily focused on. “I’m thinking about this from a team capacity perspective” (nervous system) or “I’m looking at this through the lens of user needs” (muscular system) gives everyone context for your input.

This isn’t about staying in your lane. It’s about being transparent as to which lens you’re using, so the other person knows how to best add their perspective.

Create Healthy Feedback Loops

The most successful partnerships I’ve seen establish clear feedback loops between the systems:

Nervous system signals to muscular system: “The team is struggling with confidence in their design skills” → Lead Designer provides more craft coaching and clearer standards.

Muscular system signals to nervous system: “The team’s craft skills are advancing faster than their project complexity” → Design Manager finds more challenging growth opportunities.

Both systems signal to circulatory system: “We’re seeing patterns in team health and craft development that suggest we need to adjust our strategic priorities.”

Handle Handoffs Gracefully

The most critical moments in this partnership are when something moves from one system to another. This might be when a design standard (muscular system) needs to be rolled out across the team (nervous system), or when a strategic initiative (circulatory system) needs specific craft execution (muscular system).

Make these transitions explicit. “I’ve defined the new component standards. Can you help me think through how to get the team up to speed?” or “We’ve agreed on this strategic direction. I’m going to focus on the specific user experience approach from here.”

Stay Curious, Not Territorial

The Design Manager who never thinks about craft, or the Lead Designer who never considers team dynamics, is like a doctor who only looks at one body system. Great design leadership requires both people to care about the whole organism, even when they’re not the primary caretaker.

This means asking questions rather than making assumptions. “What do you think about the team’s craft development in this area?” or “How do you see this impacting team morale and workload?” keeps both perspectives active in every decision.

When the Organism Gets Sick

Even with clear roles, this partnership can go sideways. Here are the most common failure modes I’ve seen:

System Isolation

The Design Manager focuses only on the nervous system and ignores craft development. The Lead Designer focuses only on the muscular system and ignores team dynamics. Both people retreat to their comfort zones and stop collaborating.

The symptoms: Team members get mixed messages, work quality suffers, morale drops.

The treatment: Reconnect around shared outcomes. What are you both trying to achieve? Usually it’s great design work that ships on time from a healthy team. Figure out how both systems serve that goal.

Poor Circulation

Strategic direction is unclear, priorities keep shifting, and neither role is taking responsibility for keeping information flowing.

The symptoms: Team members are confused about priorities, work gets duplicated or dropped, deadlines are missed.

The treatment: Explicitly assign responsibility for circulation. Who’s communicating what to whom? How often? What’s the feedback loop?

Autoimmune Response

One person feels threatened by the other’s expertise. The Design Manager thinks the Lead Designer is undermining their authority. The Lead Designer thinks the Design Manager doesn’t understand craft.

The symptoms: Defensive behavior, territorial disputes, team members caught in the middle.

The treatment: Remember that you’re both caretakers of the same organism. When one system fails, the whole team suffers. When both systems are healthy, the team thrives.

The Payoff

Yes, this model requires more communication. Yes, it requires both people to be secure enough to share responsibility for team health. But the payoff is worth it: better decisions, stronger teams, and design work that’s both excellent and sustainable.

When both roles are healthy and working well together, you get the best of both worlds: deep craft expertise and strong people leadership. When one person is out sick, on vacation, or overwhelmed, the other can help maintain the team’s health. When a decision requires both the people perspective and the craft perspective, you’ve got both right there in the room.

Most importantly, the framework scales. As your team grows, you can apply the same system thinking to new challenges. Need to launch a design system? Lead Designer tends to the muscular system (standards and implementation), Design Manager tends to the nervous system (team adoption and change management), and both tend to circulation (communication and stakeholder alignment).

The Bottom Line

The relationship between a Design Manager and Lead Designer isn’t about dividing territories. It’s about multiplying impact. When both roles understand they’re tending to different aspects of the same healthy organism, magic happens.

The mind and body work together. The team gets both the strategic thinking and the craft excellence they need. And most importantly, the work that ships to users benefits from both perspectives.

So the next time you’re in that meeting room, wondering why two people are talking about the same problem from different angles, remember: you’re watching shared leadership in action. And if it’s working well, both the mind and body of your design team are getting stronger.

Adapting Agencies for the AI Era

Adapting Agencies for the AI Era written by John Jantsch read more at Duct Tape Marketing

Listen to the full episode:

Overview

On this episode of the Duct Tape Marketing Podcast, John Jantsch interviews Brent Weaver, CEO of E2M Solutions—the leading provider of white label WordPress, SEO, content, and AI solutions for agencies. Brent shares his view from the front lines of agency evolution as AI, automation, and changing client expectations reshape the digital marketing landscape. They dive into the real impact of AI on agencies, the future of marketing leadership, the enduring value of strategy over tactics, and why human expertise still matters more than ever.

About the Guest

Brent Weaver is the CEO of E2M Solutions, a top white label provider of WordPress, SEO, content, and AI solutions for digital marketing agencies. With deep experience both running and supporting agencies, Brent is a recognized voice on AI, agency growth, and the new skills required to thrive in a fast-changing industry.

Actionable Insights

  • AI is rapidly raising the bar—not just for agencies, but for clients who now expect faster, better results and more transparency.
  • The white label model is evolving fast, with providers like E2M embracing “AI first” internal training, education, and even offering fractional AI services to agencies.
  • The hype of AI often exceeds reality—experiments abound, but many projects never deliver, so agencies and business owners must remain adaptable and strategic.
  • There’s still no “all-in-one” AI marketing operating system, but the industry is heading toward more integrated, seamless solutions.
  • SEO is far from dead, but marketers must get creative, focus on proprietary expertise, and optimize for both LLMs and Google, especially for local businesses.
  • Human leadership and strategy are more vital than ever. AI makes agencies more competitive, but also increases client expectations and the need for specialization and niche expertise.
  • The human element remains central: The future belongs to those who can combine AI tools with strategic thinking, EQ, and deep client understanding.
  • Agencies—and marketers—need to retool, learn continuously, and be ready to lead and manage, not just “do.”

Great Moments (with Timestamps)

  • 01:10 – The Elephant in the Room: AI’s Impact on Agencies
    Brent shares how AI is changing agency operations, results, and client expectations.
  • 02:25 – White Labeling in the Age of AI
    How E2M is retooling with “AI First Saturdays,” fractional AI services, and ongoing education.
  • 04:46 – Will We Ever Get a True AI Marketing OS?
    The reality (and limits) of current AI tools and what’s coming next.
  • 06:18 – The Hype vs. Reality of AI Projects
    Why many AI initiatives fail—and why experimentation is still worth it.
  • 08:10 – Is SEO Dead?
    Brent’s take on what’s changed, what still works, and how local and LLM optimization are evolving.
  • 11:55 – Why Agencies Are Working Harder, Not Less
    AI may automate, but competition, complexity, and client demands are rising.
  • 13:31 – The Human Element and Future-Ready Skills
    Why strategy, specialization, and leadership will define the next era of agency growth.
  • 15:17 – AI Agents, Frictionless UX, and What’s Next
    How AI will reshape customer journeys, jobs, and digital marketing roles.
  • 18:17 – From Doing to Managing: Evolving Careers and Teams
    The growing need for strategic thinkers, EQ, and continuous learning.

Insights

“AI has raised the bar for agencies and clients alike—faster, better results are expected, but human expertise is still at the center.”

“There’s no magic all-in-one AI solution yet, but those who combine tools with strategy and leadership will win.”

“SEO is evolving, not dying—marketers must focus on unique value, local search, and optimizing for new AI-driven experiences.”

“Agencies need to retool for an AI-first world, but the need for deep specialization, leadership, and EQ is greater than ever.”

“The future of digital marketing belongs to those who can marry the best of AI with strategy, creativity, and relentless learning.”

John Jantsch (00:01.405)

Hello and welcome to another episode of the Duct Tape Marketing Podcast. This is John Jantsch and my guest today is Brent Weaver. He is the CEO of E2M Solutions, the leading provider of white label WordPress, SEO, content, and AI solutions for digital marketing agencies. So guess what we’re going to talk about today? We’re going to talk about agencies and we’re going to talk about digital marketing. So Brent, welcome to the show.

Brent At E2M (00:28.728)

Great to be here. Thanks, John.

John Jantsch (00:30.427)

I did get the title right. Didn’t I? You’re the CEO currently. Yeah. Okay. I was.

Brent At E2M (00:34.446)

Yeah, yeah. Joined E2M in June of 2025. So I’m wrapping up my ninth week on duty. So it’s been a new adventure for me.

John Jantsch (00:40.198)

Yeah.

Ha ha ha.

John Jantsch (00:47.197)

Well, E2M is not new necessarily, so it’s worked with hundreds of agencies. Just in your time, and what you’ve learned or from the folks there, what do you see as some of the biggest changes in the agency landscape right now? And I know it’s evolving rapidly, but I’m curious what you’re hearing, because you pretty much talk to agencies all day long.

Brent At E2M (01:10.722)

Yeah. I mean, the obvious elephant in the room is artificial intelligence and what that’s doing, both in terms of how agencies are run and also how they’re deploying services, and also, you know, what the clients are doing with AI as well. So it’s not just like the agency using it, but the clients are using it. So I think some expectations are changing, and also speed to results is changing, because a client might say, well, hey, if I can just have AI do this in three minutes, right? Like, why is it gonna take you three or four days? And just kind of working on how to level up, you know, what you’re doing for your clients in terms of results. I mean, that bar has certainly been rising very, very quickly in terms of what expectations are. And so I think a lot of agencies are feeling a little bit of squeeze, but at the same time, they’re feeling a lot of excitement. So there’s that whole topic, yeah.

John Jantsch (02:00.833)

Well, so to tag onto that though, of course, your primary function is to, in most cases, act as a white label support for that agency. So I’m curious, has the white label model evolved? I mean, how are you, because it’s affecting agencies. So how’s it then in turn affecting what a white label provider like yourself is doing?

Brent At E2M (02:25.614)

And we’ve really, like, planted a flag that we want to be an AI-first agency. And so we are doing lots of internal kind of retooling and education. We have a thing called AI First Saturdays, where our whole company comes in every first Saturday of the month. Some of that time has been dedicated to education, working on projects kind of, you know, within the teams, doing demo days, hackathons. And so we’re definitely taking AI very seriously. Our team’s taking AI very seriously.

We’re also doing fractional AI services for agencies. So actually going in and helping the agency implement AI solutions. And so I think it’s, you know, people want better results. They want them faster, right? It’s kind of like, you know, dealing with Amazon, right? Like, people used to think, I ordered something on the internet, it’s okay that it takes seven to 10 days to show up, right? But in the post-Amazon world, you’re like, well, like my kids, for example, they’re like, why, I ordered this 30 minutes ago, why is it not already at the front door? I think some of that impatience is seeping its way into the business-to-business service model. And I also think some of that kind of what people expect in terms of a customer-centered business, like Amazon will give you refunds on just about anything, I think customers are expecting some of that from their agencies. But you know, it’s certainly having a huge impact on the overall industry.

John Jantsch (03:40.967)

Yeah.

John Jantsch (03:49.917)

So, since you opened the AI can of worms, I’ll go there directly. You know, what I’m seeing a lot of people do is, you know, it’s like we’re in these wild west days still where there’s 473 tools. People are hacking this $20 a month thing together with this $20 a month thing. They’re talking about agents and what they can do. What I’m seeing on the business side, the small business side, it’s like, okay, I get it. I get it. We need to do AI.

but this is exhausting. And, you know, is there ever going to be a day, you think, where a business owner can actually buy the full, like, marketing operating system that is AI-run and installed in their business, and not, you know, have to lean on their agency to do this and an SEO person to do that with AI? And again, I’m just asking your opinion, because it doesn’t exist today, but I feel like that’s where we’re going to go.

Brent At E2M (04:42.083)

Yeah.

John Jantsch (04:46.237)

There’s going to be the $5.99 a month solution that’s sort of an all-in-one, as opposed to custom this and custom that and custom, you know, whatever.

Brent At E2M (04:56.494)

And perhaps, you know, I think, John, the more I’ve gotten into AI personally, and the more, like, projects and use cases that I’ve seen, it’s like the more you know, the more you realize you don’t know. And I certainly think that that’s true with AI. And we’re seeing a lot of people do some, what I would almost consider to be, magical things with AI. But then there’s also this, like, maybe they do something where they don’t really have the foundational skillset. They’re using a tool like Lovable and they’re doing vibe coding, and they build an application that gets to a certain point. And then the client says, well, hey, we actually need this for a business requirement thing to do this other thing. And then all of a sudden that other thing maybe isn’t possible within the vibe coding interface. And all of a sudden you have this thing that an agency has spent weeks on in terms of a vibe-coding application build.

John Jantsch (05:27.292)

Yeah, yeah.

Brent At E2M (05:49.6)

And then the thing that they needed to do from a business case is not possible within the AI. And so then we’re hitting this wall and we have to go, my gosh, we’re going to have to completely rebuild this from the ground up without vibe coding in order to make the business case work. I mean, there was a study that said that something like 40 to 60% of enterprise AI projects, and I should probably have a source on this if I’m quoting it, but 40 to 60% of AI projects at the enterprise level are being abandoned or never see the light of day.

John Jantsch (06:18.609)

Yes.

Brent At E2M (06:18.85)

And we all know, from reading Wall Street, like, how much money is being invested in AI. That means, like, over half of the investment that major corporations are making is basically being thrown in the trash. And I’m seeing that same level of kind of experimentation happening at the agency level and also at the small business owner level. And so I think there’s still, gosh, there’s just so much learning that has to be done. The upside, though, is if you find an AI automation or agentic workflow that works, that gets to a hundred percent, it can have game-changing impact on the business, right? Like, the ROI on it can be, you know, infinity. And so it’s certainly worth making these investments, but it doesn’t mean that every investment is going to pay off.

John Jantsch (06:51.869)

Mm-hmm.

John Jantsch (06:57.223)

Peace.

Mm-hmm. Yes.

Yeah, I think one of the challenges in the window that we’re in right now is, in some cases, the hype of AI is actually outrunning the reality of it. And I think that a lot of people are like, we can fire everybody and do it all with AI. I mean, you see these people, you know, on Facebook ads, like, I have a $16 billion company and I only have two employees. You know, you’re like, you know,

Brent At E2M (07:13.848)

Sure.

Brent At E2M (07:26.982)

They, like, show this big, like, screenshot of all their automation. Crazy, I’ve replaced 64 employees, right? Like, maybe, I don’t know. I mean, usually I find when you kind of double-click on those things and you go in, there’s usually some smoke and mirrors around those things. But I don’t want to, like, bash anybody that’s

John Jantsch (07:31.482)

You

John Jantsch (07:35.909)

Yeah.

John Jantsch (07:41.341)

Well, 100% there is. And I think that, you know, it’s like all things. It’s kind of like taking advantage of the craze is actually making the reality a lot worse and a lot harder for that business owner that just needs a couple things figured out, you know, not even to replace people, but to actually empower their people in ways to do better and more work. All right. So I’ll get off my soapbox on that one and move to, let’s talk about SEO.

Brent At E2M (08:06.993)

Hahaha.

John Jantsch (08:10.727)

you know, which is a tactic, of course, a channel, if you will, that you guys play in quite a bit. That’s another one of those where, you know, like, it would take me about two minutes to find somebody who today put on LinkedIn “SEO is dead.” And so, you know, how are you, by the way, it’s not, but how are you advising clients? How are you changing even your routine and working around the reality that a lot of top-of-funnel

Brent At E2M (08:10.819)

Yeah.

John Jantsch (08:40.679)

types of content that used to generate traffic have certainly gone away. We can debate whether or not it was that valuable anyway. But how are you evolving your model when you think about SEO practices?

Brent At E2M (08:54.414)

I mean, if there’s a piece of content where you could easily just ask, you know, chat, and you would get a great answer, and that’s what you’re gonna be putting on your client’s website to help them grow their rank or grow their traffic from the LLMs and things like that, I mean, that’s certainly probably not gonna be a great strategy. And I think most people got that memo when they saw how much traffic decreased for…

John Jantsch (08:58.781)

Right.

Brent At E2M (09:19.566)

those kinds of common things, right? Like, how do I make great guacamole? Right, the user experience of asking that question on chat is just going to be far better than on a Google browser, unless somebody has some type of proprietary ingredient or approach, or they can actually build some intellectual property around it and kind of protect that unique recipe, right, in that case. Or they have a really great personality around

John Jantsch (09:36.199)

Yeah, right.

John Jantsch (09:42.727)

Mm-hmm.

Brent At E2M (09:47.054)

teaching people how to make guacamole or whatever, and they’re a great YouTuber, and there’s some type of thing that’s unique that AI cannot replicate that they can bring to the marketplace. So I do think marketers have to be a little bit more creative. There’s kind of a reinvention that’s going on. That being said, there’s also a ton of people that are now using LLMs to search for business recommendations, to search for services. And certainly there’s…

John Jantsch (10:09.307)

Yeah, 100%.

Brent At E2M (10:13.248)

a whole cottage industry. Our amount of SEO business that we’ve had has categorically gone up year over year, right? And that’s kind of in the post-AI world. And a big thing on the E2M team is, how are we optimizing our clients’ websites and search strategy for the LLMs, right? Kind of the AEO strategies, while we’re also continuing to invest in Google. And I think Google, for instance, I don’t know if search traffic, you know, the

John Jantsch (10:15.035)

the

John Jantsch (10:21.661)

Mm-hmm.

Brent At E2M (10:40.738)

The Google usage has necessarily gone down. I mean, I think they’re still driving a lot of traffic to businesses. Google Local, massively important for businesses to be active, right? Especially if they’re a service business or they’re working locally or regionally, right? That’s something that I think the LLMs aren’t doing nearly as well as Google. And so there’s certainly still lots of blue oceans, I think, on the SEO side.

John Jantsch (10:47.132)

Mm-hmm.

Yeah.

John Jantsch (11:02.119)

Yeah.

John Jantsch (11:05.853)

Yeah, and I think you’re 100% right. I mean, I’m speaking at a conference next month of all remodeling contractors. And they’re all talking about, like, what do we need to be doing to change? And it’s like, hey, you know, show up in that map pack, do a better job with your reviews, have a lot of FAQs that answer questions that people would have about remodeling. Because the trust the consumer has in the map pack, whether it’s deserved or not, is, I think, going to give that

a life for a long time. And I don’t think you’re going to see the AI overviews for somebody that’s, I mean, you can go there in AI mode and say, what’s the best remodeling contractor in this town, and it will give you an opinion. But I think people still value the map pack and the proximity and all the things that come with it.

Brent At E2M (11:55.086)

You know, one thing, I think there was this, like, undertone of, is AI going to make agencies obsolete or something? And it’s weird, because I’ve asked this to a lot of agency groups, been like, okay, in the post-AI world, who here is working less, right? And I ask people to raise their hands, and, like, nobody raised their hand. I’m like, okay, right, they’re working more. They’re working way more. It is weird. It’s a weird paradigm, right? Because you would think, hey, artificial intelligence, the computer is going to do all the work for me.

John Jantsch (12:00.955)

Yeah, yeah, you bet.

John Jantsch (12:11.133)

It’s more. They’re all working more, in fact.

Brent At E2M (12:24.718)

You know, it’s like the logical outcome of that would be that we wouldn’t have to work as hard. And, you know, even though these tools do magical things, I don’t think they make finding leads and customers any easier. If anything, they’re just making it even more competitive. They’re giving advanced marketing tools to a lot larger group of people. And so it’s getting more competitive out there. And that means that businesses…

John Jantsch (12:42.993)

Yeah.

Brent At E2M (12:52.51)

still need agencies and specialists now more than ever. In fact, they need specialists that are specialized in specializations instead of generalizations. And that was the other thing I was going to kind of bring as a theme: like, knowing who your customer is, having niche expertise, you know, really knowing your market backwards and forwards, knowing your market better than your clients know their industry, I think is now more important than ever. You know, the idea of just being a general,

John Jantsch (13:15.783)

Mm-hmm.

Brent At E2M (13:21.432)

Hey, I’m Brent, I’m the web guy, right? Like I don’t think that’s gonna fly in 2026 and beyond if it is even flying right now.

John Jantsch (13:24.401)

Yeah, right.

John Jantsch (13:31.293)

So let’s talk about the human element. I think a lot of people are wringing their hands. I mean, every time you see these headlines that, you know, Microsoft says 40% of all jobs will go away in, you know, the next three years, I think you have a lot of people kind of wringing their hands about, like, is this going to destroy the world if 40% of people are out of jobs? How are…

I’ll tell you, what we see is a lot of people that are working more, as you said, and a lot of it’s because the consumer or the business owner behavior has changed a little bit, in that they expect more. So that’s part of it. But our mantra has always been strategy before tactics. We actually feel that if you develop a great marketing strategy, marketing becomes less complicated, but far more effective.

And so, you know, what I see is a whole lot of agencies that have always been delivering tactics are now just using AI to deliver a new set of tactics, and still not thinking strategically. I think what businesses are going to need in the future is marketing leadership and not marketing doers.

Brent At E2M (14:48.78)

Yeah. Yeah. I mean, we’re running this event. It’s all about AI for agency owners. And I promise this is not just a direct plug for our event, but obviously it’s my duty to promote our event right now. But one of our attendees, and he’s kind of an AI-first person, he registered for our event, right? Went in and purchased a ticket.

John Jantsch (14:50.845)

I should have posed that as a question, but it was really more of a statement.

Brent At E2M (15:17.166)

All using his ChatGPT agent. So he literally just, you know, driving in the car, said, hey, go buy a ticket to Vistara. And his agent, you know, went, and it takes screenshots and says, hey, this is what I’m doing along the way. But, you know, he just kind of had to say, yep, yep, and the agent already has all of his information, and it has all of the information that it needs in order to fill out the web forms and actually purchase a product.

It’s almost like, from that perspective, every business’s website just became kind of the Amazon one-click shopping experience. You know, if we fast forward two, three years, if we all have these agents that have, you know, secure access to our banking details and to our PII, and they know kind of our preferences, I mean, what could you do if you can line up an experience that you know is going to meet the needs of a specific target audience? You know, you have…

very few barriers standing between you and them making a transaction, becoming a customer. And I think in that way, AI is going to be disrupting some of these workflows, some of these user interfaces, just in terms of how we as web professionals and digital agencies view creating those experiences. And so I think that while AI is certainly going to destroy some number of jobs, whether it’s 40%, 20%, I don’t know what the number is.

John Jantsch (16:40.976)

This is…

Brent At E2M (16:42.646)

I trust Microsoft; they’re now worth $4 trillion, they must be doing something right. But I think for every job that it destroys, there’s going to be new jobs created. You know, if I lived in a Waymo city, I was just out in California, I rode in my first driverless taxi. Would I still have a car? I don’t know, because I’m like, well, what if I just used Waymo to get everywhere I needed to go? And when I’m in the car, I’m going to work on,

John Jantsch (16:45.309)

You

John Jantsch (16:52.625)

Yeah, I’m seeing that already, right.

Brent At E2M (17:10.094)

I’m going to be productive. I’m going to do work. I’ll call 10 more agencies to see if they can come to my event, right? So I do think that there’s, like, some things that are not super exciting in terms of jobs in the marketplace right now that likely are going to go away, right? Like data entry, data harvesting from the internet, you know, content editors, right? Like, I can get a lot of my content. I can dictate it to chat and it

John Jantsch (17:28.943)

Mm-hmm. Just basic research. Yeah.

Brent At E2M (17:38.52)

gives me pretty good content. It even gives me some suggestions on how to evolve it, and gives me different versions, you know, hey, here’s a version for LinkedIn, here’s a version for Facebook, here’s a version for Instagram, right? Here’s a script that you can go take a video and record, right? So things that I would have relied on three, four, five people for before, I can get done myself. And a lot of times I don’t think I was necessarily hiring those people. I just wasn’t doing it, right? I’d post maybe on one platform instead of four, and I wouldn’t hire a social media person to do that.

John Jantsch (17:45.917)

All right. Yep.

Brent At E2M (18:07.022)

And so I think that some of these things are certainly going to destroy jobs, but you know, like what will my kids’ jobs be in 15, 20 years? I have no idea, but they’ll have something to do, I’m sure.

John Jantsch (18:07.867)

Yes.

John Jantsch (18:17.841)

Yeah. Yeah. I’m even seeing that in our organization. You know, people that really were good doers, good implementers, we’re really pushing them. Whether they have employees under them or not, they have to really think more like leaders and think more like managers who are going to optimize, you know, some of these tools, as opposed to, you know, writing every bit of social media content. They’re going to be, you know,

looked at more as managers. And I think that, from a skill set standpoint, that’s probably not everybody’s sweet spot. I mean, there definitely are people that are just very good at, give me an SOP and I’ll follow it. But I do think that, from a career standpoint, you know, if you’re one of those people, you probably need to really start looking at, how do I build my strategic thinking, my EQ skills, you know, over and above, you know, being able to manage a spreadsheet.

Brent At E2M (19:15.79)

And sometimes, you know, not to, like, be capitalist or whatever, but, like, I think at some point, right, if people aren’t willing to move towards that opportunity voluntarily, they will, you know, maybe have to learn some hard lessons. And some of those might be expensive lessons. I think certainly, as an entrepreneur, I’ve had to learn some expensive lessons when I didn’t pivot hard enough or, you know, change my business or change my mindset fast enough. And then you…

John Jantsch (19:32.327)

Yeah, Yeah.

Brent At E2M (19:43.51)

And then you’re like, that was a hard lesson. I’m never going to do that again. So I think, you know, if people don’t evolve right now and they don’t invest time… Like, again, we’re doing these AI-first Saturdays as a team. And at first people were like, I have to come in on Saturday? And we’re like, hey, look, this is the reality. Like, we’re all working an extra day right now to make sure that we can properly retool and learn enough about this tech, because during the week, we’re all very focused on client work. We’re focusing on,

John Jantsch (19:46.097)

Right, Yeah.

John Jantsch (19:55.121)

Yeah, right.

Brent At E2M (20:11.758)

you know, doing the day-to-day business. And so, you know, we’ve made that a priority all the way up to our leadership team, right? I mean, I’m waking up at four o’clock every Saturday, joining the team and, you know, working on AI stuff.

John Jantsch (20:25.713)

Well, Brent, we’ve frittered away a perfectly good 20 minutes here trying to talk people off the cliff a little bit. Is there someplace you’d invite people to learn more about E2M’s solutions?

Brent At E2M (20:39.34)

Yeah, you can definitely check us out at e2msolutions.com. You can always email me, brent at e2msolutions.com. We’re running an event called Vistara at the end of September, depending on when this episode airs, join vistara.com. We’re doing two full days on artificial intelligence and agency growth. I mean, I think this is such an important topic, we’re running a full two-day event on this in Denver, Colorado. So if you’re interested, certainly reach out.

John Jantsch (21:01.319)

Yes.

John Jantsch (21:06.791)

Well, I’m just down the road. I should probably come down and speak at the event. Well, let’s make it happen then. All right. Awesome. Well, again, I appreciate you stopping by, and hopefully we’ll see you soon out there on the road.

Brent At E2M (21:10.606)

I think so. I think so. You should. Let’s make it happen.

Brent At E2M (21:20.942)

Thanks, John.

Nintendo’s breakout title was not Mario Bros. or The Legend of Zelda, but rather Donkey Kong, with the eponymous ape headlining the company’s 1981 arcade game that helped Nintendo establish a prominent foothold in the American gaming industry. Since then Donkey Kong has become a fixture for the company, either as a supporting character for Mario-led ensemble games or his own line of starring titles across Nintendo’s many home and handheld consoles. This has continued into the company’s fledgling Nintendo Switch 2 era, with the console’s Donkey Kong Bananza the most positively buzzed-about title from the Switch 2’s launch library.

Donkey Kong has a longstanding history within the video game industry, though his starring titles aren’t quite as prolific as those of some of his other Nintendo counterparts. With that said, Donkey Kong has starred in at least one game on virtually every major Nintendo system and remains a cornerstone property for the company. Here are the top 10 best Donkey Kong games ranked, not counting his supporting-character ensemble appearances.

10. Donkey Kong 64 (1999)

We’re starting off this list with a relatively divisive entry, as many have, not inaccurately, derided 1999’s Donkey Kong 64 as an overstuffed collect-a-thon on the Nintendo 64. To this common criticism’s credit, the game does have you replay many of the same levels with different characters (something 2004’s Super Mario 64 DS also did without as much backlash), but that criticism overlooks the point. In not only catapulting Donkey Kong and his friends into the world of 3D platforming, Donkey Kong 64 rose above its contemporaries in the genre.

There is an under-appreciated depth to Donkey Kong 64, particularly in its rich level design and atmospheric musical score composed by Grant Kirkhope. What a lot of retrospective reviews also don’t take into account is that there was a ton of 3D platforming slop flooding the market after the success of Super Mario 64 and Banjo-Kazooie, slop that Donkey Kong 64 clearly stood a cut above during its release. Certainly not without its flaws, Donkey Kong 64 deserves far more love than it gets these days, or at least a less dubious reputation.

9. Donkey Kong Jungle Beat (2004)

Nintendo home consoles have a longstanding legacy of quirky peripherals, a tradition continued by the Nintendo GameCube at the dawn of the 21st century. After the DK Bongos, a bongo drum peripheral for the GameCube, were introduced alongside 2003’s Donkey Konga, the peripheral was given a more optimized gameplay experience in 2004’s Donkey Kong Jungle Beat. The game continues the rhythm-based gameplay from Donkey Konga, albeit within a side-scrolling platformer controlled by the drums.

One of the most accessible Donkey Kong games in terms of difficulty, Jungle Beat takes the DK Bongos to their logical apex in usage. More than just providing a unique mechanic in moving across the levels, there is something fundamentally cathartic about pounding on a set of bongos to pummel an imposing boss. A forgotten entry in the Donkey Kong franchise because of its signature peripheral, Donkey Kong Jungle Beat should receive an update given the possibilities through Nintendo’s continued motion control support.

8. Mario vs. Donkey Kong (2004)

While Mario and Donkey Kong may be nominal buddies now, palling around throughout various Nintendo sports titles, Mario Kart, and Mario Party, there was a time when they were bitter rivals. That antagonistic history is revisited in 2004’s Mario vs. Donkey Kong for the Game Boy Advance, the spiritual successor to 1994’s Donkey Kong on the original Game Boy. The game’s offbeat story has Mario owning and running his own toy factory, which is raided by Donkey Kong for its popular line of wind-up Mini-Mario figures.

Mario vs. Donkey Kong ups the ante from the puzzle-solving gameplay and traversal present in the 1994 Game Boy game, adding a new wrinkle with the presence of the Mini-Marios which have to be navigated to safety. The game received a surprise remake on the Nintendo Switch in 2024, making the controls more intuitive to the modern console while significantly upping the technical presentation. While we’re certainly happy that Mario and Donkey Kong play nice again these days, Mario vs. Donkey Kong is a fresh take on one of gaming’s oldest beefs.

7. Donkey Kong Country Returns (2010)

After completely revitalizing the Metroid franchise, developer Retro Studios did the same for Donkey Kong with 2010’s Donkey Kong Country Returns for the Wii. Returning to the side-scrolling platforming that made Rare’s original Donkey Kong Country trilogy such a major success on the Super Nintendo, the game has Donkey and Diddy Kong take on a new villain, the Tiki Tak Tribe. In a change-up from previous titles in the series, the game also allows for two-player simultaneous co-op, with the second player controlling Diddy.

Donkey Kong Country Returns is a welcome, hurm, return to form for the franchise, although not without a few bumps in its execution. Boasting a faster pace than the original trilogy, the Wii revival is markedly more difficult than many games in the series, something that highlights how frustrating the console’s motion controls can be. The game received slightly enhanced remasters on the Nintendo 3DS and Switch, upgrading the visual presentation, though its gameplay flaws are still present and these remasters don’t add all that much to the overall experience.

6. Donkey Kong Country 3: Dixie Kong’s Double Trouble! (1996)

At the twilight of the Super Nintendo’s lifecycle, and after the Nintendo 64 had already launched, Rare released one final Donkey Kong Country game for the SNES: 1996’s Donkey Kong Country 3: Dixie Kong’s Double Trouble! Just as Donkey Kong Country 2 ditched Donkey Kong himself, this sequel discards Diddy as well, replacing him with the new playable character Kiddy Kong. Together, Dixie and Kiddy set out to rescue Donkey and Diddy from Baron K. Roolenstein in a northern region with geography inspired by Scandinavia and Canada.

Though Donkey Kong Country 3 may be the weakest entry of the original trilogy, it is by no means a bad game and brings some fresh changes to the series, specifically, an open hub map and vehicles. But the game just doesn’t quite feel as inspired as its two predecessors, even with the protagonist swap to introduce Kiddy. A solid if seemingly obligatory coda to the original Donkey Kong Country trilogy, Donkey Kong Country 3 retains the core appeal as it goes through well-worn territory.

5. Donkey Kong Country: Tropical Freeze (2014)

Relegated to being a supporting character for years, Cranky Kong makes his playable debut with 2014’s Donkey Kong Country: Tropical Freeze for the Wii U, joined by a returning Dixie. The game has Donkey Kong’s birthday party interrupted by the attacking Snowmads, an army of arctic animals plotting to conquer Donkey Kong Island and plunge it into endless winter. The Kongs battle their way back to the heart of their native island, defeat the Snowmads and defrost their home from its newly icy condition.

Tropical Freeze is an all-around improvement over Donkey Kong Country Returns, significantly refining the gameplay mechanics and level design. At the same time, the difficulty remains as high as ever while the overall number of levels is reduced from its predecessor. A solid addition to the series, the definitive Tropical Freeze experience is its enhanced remaster on the Nintendo Switch, adding Funky Kong as a playable character.

4. Donkey Kong Country (1994)

To anyone who was around and playing video games in 1994, the original Donkey Kong Country was a huge deal when it debuted on the Super Nintendo that year. With its crisp, pre-rendered graphics, there was nothing else that looked that good on the console market at the time, helping steer the industry towards more 3D aesthetics. The game’s story is simple: King K. Rool raids Donkey Kong Island and steals Donkey Kong’s vast stash of bananas, prompting the ape and his nephew Diddy, in the latter’s first appearance, to recover their purloined fruit.

If it was just about that initial wow factor from its pre-rendered presentation, we wouldn’t still be talking about Donkey Kong Country over 30 years after its launch. But more than just its eye-catching visuals, the game completely redefined Donkey Kong down to his character design and reestablished him as a core Nintendo property. That legacy stems from the combination of impressive technical presentation, an instant-classic score composed primarily by David Wise, and an engaging side-scrolling platformer experience that this first game brings to the table.

3. Donkey Kong (1994)

Initially, 1994’s Donkey Kong on the Game Boy looks and feels like a smoother, more intuitive port of the classic 1981 arcade game of the same name. However, after completing the four levels from the arcade title, the 1994 game expands into a full-on adventure as Donkey Kong kidnaps Mario’s girlfriend Pauline again and hightails it across 97 additional levels in nine themed worlds. Joining Donkey Kong in trying to stay one step ahead of Mario is Donkey Kong Jr., while the mustached plumber gains a new set of moves to keep up with the apes.

The Game Boy Donkey Kong is a love letter to both the original arcade game and its 1982 sequel Donkey Kong Jr. while ambitiously building upon those foundations. At its core, the game is more of a puzzle-solving experience than a platforming one, a distinction that becomes more evident in the level design as players progress. That helps elevate 1994’s Donkey Kong tremendously above similar games in the franchise and makes it a refreshing twist on a familiar premise.

2. Donkey Kong Country 2: Diddy’s Kong Quest (1995)

If Donkey Kong Country revolutionized the way we looked at side-scrolling platformers, its 1995 sequel Donkey Kong Country 2: Diddy’s Kong Quest used that as a springboard to make the ultimate side-scrolling Donkey Kong experience. As the title suggests, Diddy steps up as the protagonist, teaming up with his newly introduced girlfriend Dixie Kong to rescue Donkey from Kaptain K. Rool. The duo travel to the pirate warlord’s hideout on Crocodile Isle where they each use their unique abilities to traverse 52 levels and rescue Diddy’s uncle.

Donkey Kong Country 2 is one of those cases where bigger does actually mean better, with more secrets, more animal buddies to temporarily control, and more enemy types to take down. The level design is more ambitious, and how differently Diddy and Dixie each play makes for a much deeper and richer experience than its predecessor offered. An all-around improvement over the original Donkey Kong Country, the sequel stands as the apex of side-scrollers and one of the best games developer Rare ever made.

1. Donkey Kong Bananza (2025)

Released a full month after the Nintendo Switch 2’s launch, Donkey Kong Bananza became the most buzzed-about game from the console’s library, even over Mario Kart World. This praise is well-earned, with Donkey Kong Bananza successfully catapulting its heroic ape into a full-on 3D platformer experience built around the character’s notable strength by placing him in destructible environments. The game has Donkey Kong travel to Ingot Isle to harvest Banandium Gems, teaming up with a teenage Pauline against the sinister VoidCo mining company eager to obtain the Banandium Root, no matter the cost to the environment.

From its immersive level design, to side-scrolling detours straight out of the original Donkey Kong Country in a fun and emotional tribute, to a redesigned Donkey Kong with plenty of personality, Donkey Kong Bananza repositions DK as a marquee Nintendo franchise. But for all its celebration of the history of all things Donkey Kong, Bananza is just a lot of fun, with its key gameplay mechanic of literally tearing through environments being incredibly cathartic. This game lets players go nuts and bash and smash everything around them. A subtle reinvention of what’s possible for a Donkey Kong game, Donkey Kong Bananza is Nintendo’s best 3D platformer since at least 2017’s Super Mario Odyssey, and the most unabashedly fun Donkey Kong game in, well, ever.

The post Donkey Kong Video Games Ranked from Fun to Absolute Bananas appeared first on Den of Geek.

Nintendo’s breakout title was not Mario Bros. or The Legend of Zelda, but rather Donkey Kong, with the eponymous ape headlining the company’s 1981 arcade game that helped Nintendo establish a prominent foothold in the American gaming industry. Since then Donkey Kong has become a fixture for the company, either as a supporting character for Mario-led ensemble games or his own line of starring titles across Nintendo’s many home and handheld consoles. This has continued into the company’s fledgling Nintendo Switch 2 era, with the console’s Donkey Kong Bananza the most positively buzzed-about title from the Switch 2’s launch library.

Donkey Kong has a longstanding history within the video game industry, though the number of games that he primarily stars in aren’t quite as prolific as some of his other Nintendo counterparts. With that said, Donkey Kong has starred in at least one game on virtually every major Nintendo system and remains a cornerstone property for the company. Here are the top 10 best Donkey Kong games ranked, not counting his supporting character ensemble appearances.

cnx.cmd.push(function() {
cnx({
playerId: “106e33c0-3911-473c-b599-b1426db57530”,

}).render(“0270c398a82f44f49c23c16122516796”);
});

10. Donkey Kong 64 (1999)

We’re starting off this list with a relatively divisive entry, as many have not inaccurately derided 1999’s Donkey Kong 64 as an overstuffed collect-a-thon on the Nintendo 64. To this common criticism’s credit, the game does have you replay many of the same levels with different characters—something 2004’s Super Mario 64 DS also did without as much backlash—but that overlooks the point. Indeed, not only catapulting Donkey Kong and his friends into the world of 3D platforming, Donkey Kong 64 rose above its contemporaries in the genre.

There is an under-appreciated depth to Donkey Kong 64, particularly in its rich level design and atmospheric musical score composed by Grant Kirkhope. What a lot of retrospective reviews also don’t take into account is that there was a ton of 3D platforming slop flooding the market after the success of Super Mario 64 and Banjo-Kazooie, slop that Donkey Kong 64 clearly stood a cut above during its release. Certainly not without its flaws, Donkey Kong 64 deserves far more love than it gets these days, or at least a less dubious reputation.

9. Donkey Kong Jungle Beat (2004)

Nintendo home consoles have a longstanding legacy of quirky peripherals, a tradition that was continued by the Nintendo GameCube at the dawn of the 21st century. After introducing the DK Bongos, a bongo drum peripheral for the GameCube and 2003’s Donkey Konga, the peripheral was given more optimized gameplay experience in 2004’s Donkey Kong Jungle Beat. The game continues the rhythm-based gameplay from Donkey Konga, albeit within a side-scrolling platformer controlled by the drums.

One of the most accessible Donkey Kong games in terms of difficulty, Jungle Beat takes the DK Bongos to their logical apex in usage. More than just providing a unique mechanic in moving across the levels, there is something fundamentally cathartic about pounding on a set of bongos to pummel an imposing boss. A forgotten entry in the Donkey Kong franchise because of its signature peripheral, Donkey Kong Jungle Beat should receive an update given the possibilities through Nintendo’s continued motion control support.

8. Mario vs. Donkey Kong (2004)

While Mario and Donkey Kong may be nominal buddies now, palling around throughout various Nintendo sports titles, Mario Kart, and Mario Party, there was a time when they were bitter rivals. That antagonistic history is revisited in 2004’s Mario vs. Donkey Kong for the Game Boy Advance, the spiritual successor to 1994’s Donkey Kong on the original Game Boy. The game’s offbeat story has Mario own and run his own toy factory, which is raided by Donkey Kong for the factory’s popular line of wind-up Mini-Mario figures.

Mario vs. Donkey Kong ups the ante from the puzzle-solving gameplay and traversal present in the 1994 Game Boy game, adding a new wrinkle with the presence of the Mini-Marios which have to be navigated to safety. The game received a surprise remake on the Nintendo Switch in 2024, making the controls more intuitive to the modern console while significantly upping the technical presentation. While we’re certainly happy that Mario and Donkey Kong play nice again these days, Mario vs. Donkey Kong is a fresh take on one of gaming’s oldest beefs.

7. Donkey Kong Country Returns (2010)

After completely revitalizing the Metroid franchise, developer Retro Studios did the same for Donkey Kong with 2010’s Donkey Kong Country Returns for the Wii. Returning to the side-scrolling platforming that made Rare’s original Donkey Kong Country trilogy such a major success on the Super Nintendo, the game has Donkey and Diddy Kong take on a new villain, the Tiki Tak Tribe. In a change-up from previous titles in the series, the game also allows for two-player simultaneous co-op, with the second player controlling Diddy.

Donkey Kong Country Returns is a welcome, hurm, return to form for the franchise, although not without a few bumps in its execution. Boasting a faster pace than the original trilogy, the Wii revival is markedly more difficult than many games in the series, something that highlights how frustrating the console’s motion controls can be. The game received slightly enhanced remasters on the Nintendo 3DS and Switch, upgrading the visual presentation, though its gameplay flaws are still present and these remasters don’t add all that much to the overall experience.

6. Donkey Kong Country 3: Dixie Kong’s Double Trouble! (1996)

At the twilight of the Super Nintendo’s lifecycle, and after the Nintendo 64 had already launched, Rare released one final Donkey Kong Country game for the SNES: 1996’s Donkey Kong Country 3: Dixie Kong’s Double Trouble! Just as Donkey Kong Country 2 ditched Donkey Kong himself, its sequel also discards Diddy, replacing him with the new playable character Kiddy Kong. Together, Dixie and Kiddy set out to rescue Donkey and Diddy from Baron K. Rool in a northern region with its geography inspired by Scandinavia and Canada.

Though Donkey Kong Country 3 may be the weakest entry of the original trilogy, it is by no means a bad game, and it brings some fresh changes to the series, specifically an open hub map and vehicles. But the game just doesn’t feel quite as inspired as its two predecessors, even with the protagonist swap that introduces Kiddy. A solid if seemingly obligatory coda to the original trilogy, Donkey Kong Country 3 retains the series’ core appeal as it moves through well-worn territory.

5. Donkey Kong Country: Tropical Freeze (2014)

Relegated to a supporting role for years, Cranky Kong makes his playable debut in 2014’s Donkey Kong Country: Tropical Freeze for the Wii U, joined by a returning Dixie. The game has Donkey Kong’s birthday party interrupted by the invading Snowmads, an army of arctic animals plotting to conquer Donkey Kong Island and plunge it into endless winter. The Kongs battle their way back to the heart of their native island to defeat the Snowmads and thaw their newly frozen home.

Tropical Freeze is an all-around improvement over Donkey Kong Country Returns, significantly refining the gameplay mechanics and level design. At the same time, the difficulty remains as high as ever, while the overall number of levels is smaller than its predecessor’s. A solid addition to the series, Tropical Freeze is best experienced through its enhanced remaster on the Nintendo Switch, which adds Funky Kong as a playable character.

4. Donkey Kong Country (1994)

To anyone who was around and playing video games in 1994, the original Donkey Kong Country was a huge deal when it debuted on the Super Nintendo that year. With its crisp, pre-rendered graphics, there was nothing else that looked that good on the console market at the time, helping steer the industry towards more 3D aesthetics. The game’s story is simple: King K. Rool raids Donkey Kong Island and steals Donkey Kong’s vast stash of bananas, prompting the ape and his nephew Diddy, in his first appearance, to recover their purloined fruit.

If it was just about that initial wow factor from its pre-rendered presentation, we wouldn’t still be talking about Donkey Kong Country over 30 years after its launch. But more than just its eye-catching visuals, the game completely redefined Donkey Kong down to his character design and reestablished him as a core Nintendo property. That legacy stems from a combination of impressive technical presentation, an instant-classic score composed primarily by David Wise, and the engaging side-scrolling platformer experience that this first game brings to the table.

3. Donkey Kong (1994)

Initially, 1994’s Donkey Kong on the Game Boy looks and feels like a smoother, more intuitive port of the classic 1981 arcade game of the same name. However, after the four levels from the arcade title are completed, the 1994 game expands into a full-on adventure as Donkey Kong kidnaps Mario’s girlfriend Pauline again and hightails it across 97 additional levels spread over nine themed worlds. Joining Donkey Kong in trying to stay one step ahead of Mario is Donkey Kong Jr., while the mustached plumber gains a new set of moves to keep up with the apes.

The Game Boy Donkey Kong is a love letter to both the original arcade game and its 1982 sequel Donkey Kong Jr. while ambitiously building upon those foundations. At its core, the game is more of a puzzle-solving experience than a platforming one, a distinction that becomes more evident in the level design as players progress. That helps elevate 1994’s Donkey Kong tremendously above similar games in the franchise, making it a refreshing twist on a familiar premise.

2. Donkey Kong Country 2: Diddy’s Kong Quest (1995)

If Donkey Kong Country revolutionized the way we looked at side-scrolling platformers, its 1995 sequel Donkey Kong Country 2: Diddy’s Kong Quest used that as a springboard to make the ultimate side-scrolling Donkey Kong experience. As the title suggests, Diddy steps up as the protagonist, teaming up with his newly introduced girlfriend Dixie Kong to rescue Donkey from Kaptain K. Rool. The duo travel to the pirate warlord’s hideout on Crocodile Isle where they each use their unique abilities to traverse 52 levels and rescue Diddy’s uncle.

Donkey Kong Country 2 is one of those cases where bigger actually does mean better, with more secrets, more animal buddies to temporarily control, and more enemy types to take down. The level design is more ambitious, and the distinct ways that Diddy and Dixie play make for a much deeper and richer experience than the original. An all-around improvement over its predecessor, the sequel stands as the apex of side-scrollers and one of the best games that developer Rare ever made.

1. Donkey Kong Bananza (2025)

Released a full month after the Nintendo Switch 2’s launch, Donkey Kong Bananza became the most buzzed-about game from the console’s library, even over Mario Kart World. This praise is well-earned, with Donkey Kong Bananza successfully catapulting its heroic ape into a full-on 3D platformer experience built around the character’s notable strength by placing him in destructible environments. The game has Donkey Kong travel to Ingot Isle to harvest Banandium Gems, teaming up with a teenage Pauline against the sinister VoidCo mining company eager to obtain the Banandium Root, no matter the cost to the environment.

From its immersive level design to its side-scrolling detours straight out of the original Donkey Kong Country, a fun and emotional tribute, to a redesigned Donkey Kong with plenty of personality, Donkey Kong Bananza repositions DK as a marquee Nintendo franchise. But for all its celebration of the entire history of all things Donkey Kong, Bananza is above all just a lot of fun, with its key gameplay mechanic of literally tearing through environments proving incredibly cathartic. This game lets players go nuts, bashing and smashing everything around them. A subtle reinvention of what’s possible for a Donkey Kong game, Donkey Kong Bananza is Nintendo’s best 3D platformer since at least 2017’s Super Mario Odyssey, and the most unabashedly fun Donkey Kong game in, well, ever.

The post Donkey Kong Video Games Ranked from Fun to Absolute Bananas appeared first on Den of Geek.