An Army of Agents

Published: May 7, 2024
An approachable introduction to AI agents and why they’re likely to be the most impactful technology of our generation.
 

Inspiration

In the near future, you will have your own personal army of AI agents that are always-on, doing work in the background on your behalf – 24/7. Every time you log in to your computer, you’ll have access to dozens of these agents, and you’ll orchestrate a small army of them like you’re conducting a digital symphony.
This future is definitely coming – it’s more of a question of when, not if. And those of us who’re prepared for it ahead of time will be significantly better off compared to those of us who aren’t.
So what the hell is an AI agent? And what will these things actually be capable of?
You’re probably already familiar with the concept of self-driving cars. You can think of AI agents as self-driving computer programs. With AI agents, instead of having a human programmer who needs to explicitly write out what happens in every scenario, you replace or augment the human programmer with AI and have the program drive itself. We call these self-driving programs AI agents.
You give an agent a task, you give it some resources to accomplish that task, and it will go out into the digital world to try and accomplish that task. That’s just one agent, though. Now imagine a swarm of a thousand agents at your command, all buzzing around in the background, coordinating with each other, and doing all sorts of “work”. Hell, this idea might even sound familiar to you if you’ve ever been a manager before, just with teams of AI workers instead of human workers.
It’s still very early days for AI agents, and they currently struggle with reliability and generality. But one point I can’t stress enough is that glimpses of this future are already here; they’re just not evenly distributed yet. And I believe it’s incredibly important for us to try to understand the implications ahead of time.
What might the UX look like for controlling distributed swarms of AI agents? (Midjourney v5.2)
 

Context

As Sam Altman recently laid out, agentic adoption will start off slowly, with our chatbots becoming more capable and more reliable. Soon, it’ll feel like you have your own personal chief of staff to rely on. Next, your personal AI will slowly transition to a team of more specialized agents, some of which will be working in the background on your behalf. Over time, the line between human remote workers and AI agents will begin to blur. And at some point, you will inevitably stop noticing or caring whether these agents are human or AI. The ultimate Turing test.
Reliable AI agents will affect everything from how we consume the news to how we conduct research to how we socialize to how we teach our children to how we deliver healthcare to how we work to how we play and everything in-between. Oh, and it will absolutely impact how we wage war... Entire industries will be disrupted over relatively short periods of time – just like what happened during the dot-com boom or after the invention of the printing press. All knowledge work, i.e., all work that is done in front of a computer, is going through a fundamental revolution. Satya Nadella, the CEO of Microsoft, said in a recent interview that we’re going through the equivalent of the industrial revolution but for knowledge workers. And the craziest part is that we’re still only just scratching the surface of understanding how this will impact our society as a whole.
Now, you might be saying that sounds wonderful and all, but what will this future actually look like in practice? And how do we get from our current AI tools like ChatGPT to this agentic, cyberpunk future of tomorrow?
Well, the first thing I’ll say is that nobody really knows the answer to this for sure. While we’re currently in the middle of an unprecedented wave of AI progress, in the past we’ve seen AI go through multiple ups and downs via so-called AI winters. It’s entirely possible that the core transformer architecture invented in 2017 that’s at the heart of most AI progress today will reach a local maximum at some point and level off, but we’re definitely not seeing that at the moment.
Anyone who’s been following tech news knows that the current pace of AI progress is insane, exciting, and concerning all at once. Every day of every week we’re seeing incredible advances both from academia and industry, and for perhaps the first time ever, we have hundreds of millions of mainstream consumers along for the ride, actively using viral apps like ChatGPT and Midjourney and drastically shortening the gap between research and consumer adoption.
I want to be clear on one thing, however: nobody really knows the full extent of AI’s impact on society in the short-to-medium-term, and we can only look to science fiction for ideas on how AI will impact society in the long-term. Whether they’re the most sophisticated AI PhDs, the most powerful CEOs, or the leading AI safety experts, anyone who tells you with confidence that they know how this is all going to play out is either lying to themselves or lying to you.
 

AI Progress Since ChatGPT

With these caveats in mind, there are some things I can say with confidence. And to do that, I’d like to ground this discussion in what’s possible today and think step-by-step through how the next few years are likely to play out, using ChatGPT as a representative example.
The first version of ChatGPT was released on November 30th, 2022. You asked ChatGPT a question via text, it used that question and any previous conversation history as context, and it generated a text response using an LLM, which roughly approximates a lossy compression of the web with some RLHF sprinkled on top. For the first time in history, normal people could hold conversations with an AI model, and it represented a major inflection point in usability, quality, and consumer adoption.
The next major step was giving LLMs like ChatGPT the ability to use tools. Tool use attempts to directly mitigate the major shortcomings of base LLMs, namely that they don’t have access to up-to-date data, they tend to hallucinate on specifics, and they’re designed to work with a relatively limited set of content types. One of the most powerful ways to use LLMs today is to provide them with a very carefully crafted set of tools and then ask them to accomplish a task, invoking the model repeatedly in a loop and letting it call those tools as needed. When people talk about AI agents circa 2023, this is essentially what they boil down to: an LLM with access to tools in a while loop.
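To make that loop concrete, here’s a minimal sketch of what such an agent boils down to. Everything here is an illustrative assumption: `call_llm` stands in for whatever chat-completion API you happen to be using, and the two tools are toy placeholders, but the shape of the loop is the point.

```python
import json

def search_web(query: str) -> str:
    """Hypothetical tool: return search results for the given query."""
    ...

def calculator(expression: str) -> str:
    """Hypothetical tool: evaluate a simple arithmetic expression."""
    return str(eval(expression))  # toy example only; never eval untrusted input

TOOLS = {"search_web": search_web, "calculator": calculator}

def call_llm(messages: list[dict]) -> dict:
    """Hypothetical wrapper around whatever chat-completion API you use.
    Assume it returns either {"type": "tool_call", "tool": "...", "args": {...}}
    or {"type": "final", "content": "..."}."""
    ...

def run_agent(task: str, max_steps: int = 10) -> str:
    """An LLM with access to tools, invoked repeatedly in a loop."""
    messages = [{"role": "user", "content": task}]
    for _ in range(max_steps):
        reply = call_llm(messages)
        if reply["type"] == "final":                     # the model thinks it's done
            return reply["content"]
        result = TOOLS[reply["tool"]](**reply["args"])   # run the requested tool
        messages.append({"role": "assistant", "content": json.dumps(reply)})
        messages.append({"role": "tool", "content": result})
    return "Stopped after hitting the step limit."
```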
Another major step has been giving LLMs the ability to work with multimodal content – e.g., images, video, audio, PDFs, and websites, in addition to more exotic modalities such as DNA, ultrasound, and brain waves. Some of these modalities are currently supported via read-only tools (like the browse tool), while others are more natively supported by multimodal LLMs like GPT-4V and Gemini using joint embedding spaces. Most of these modalities, however, still have very limited support in practice.
This brings us to approximately where we’re at today, with a knowledge cutoff date of May 2024. LLM-powered tools like ChatGPT are slowly becoming more capable and reliable, with access to an early but growing set of tools and multimodal capabilities being two of the most notable improvements. The underlying LLMs are also rapidly improving, with many AI researchers predicting that current scaling laws will hold for some definition of “a while”, with the limiting factors being funding for the compute and data you can throw at training them, along with potentially diminishing returns in marginal intelligence.
In the short-term, the main research and engineering challenges seem pretty clear. How do we build scalable, composable systems on top of LLMs and other AI models? How do we make these systems reliable and intuitive when asking them to accomplish more complicated tasks? What’s the best way to add planning, task decomposition, long-term memory, and feedback loops to these programs (e.g., moving from humans driving the bus to AIs driving the bus)? Will this be possible with existing LLM architectures, or are we missing some key ingredient like search in order to make these more capable, autonomous agents reliable? And how do we ensure that these agents remain safe and aligned, making decisions similar to those a human operator would’ve made given the same circumstances and abilities?
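To make the planning and task-decomposition question a bit more concrete, here’s one pattern people are experimenting with today, sketched on top of the hypothetical `call_llm` and `run_agent` stubs from the earlier snippet: a planner breaks the task into subtasks, worker agents execute them, and a simple critic closes the feedback loop. Again, this is an illustrative sketch under those assumptions, not a reference implementation.

```python
def plan(task: str) -> list[str]:
    """Ask the LLM to break a task into small, ordered subtasks
    (assuming a plain "final" reply here)."""
    reply = call_llm([{"role": "user",
                       "content": f"Break this task into small steps, one per line:\n{task}"}])
    return [line for line in reply["content"].splitlines() if line.strip()]

def critique(task: str, results: list[str]) -> bool:
    """Ask the LLM whether the combined results actually accomplish the task."""
    reply = call_llm([{"role": "user",
                       "content": f"Task: {task}\nResults so far: {results}\nIs the task done? Answer yes or no."}])
    return reply["content"].strip().lower().startswith("yes")

def run_with_planning(task: str, max_rounds: int = 3) -> list[str]:
    """Planner, then workers, then critic, repeated until the critic is satisfied."""
    results: list[str] = []
    for _ in range(max_rounds):                 # outer feedback loop
        for subtask in plan(task):              # task decomposition
            results.append(run_agent(subtask))  # each subtask handled by an agent loop
        if critique(task, results):             # simple self-evaluation step
            break
    return results
```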
It might seem like there’re more unknowns than knowns at the moment, but one bright spot that’s starting to crystallize is to think of LLMs as CPUs in a fundamentally new, higher-order computing paradigm. Andrej Karpathy refers to this as an LLM OS. From this perspective, programs using LLMs today are roughly analogous to early computer programs being built back in the 1970s. They have very strict resource constraints (RAM, compute power, disk space, etc), the developer tools for building these programs are fairly immature, the best practices are constantly being updated (prompt engineering, eval approaches, tools, etc), and as a result, the programs being built on top of LLMs are currently fairly limited in scope.
Who could’ve imagined back in the 1970s all of the ubiquitous, life-changing tools we’d build nearly 50 years later – tools like Google search, Wikipedia, smartphones, and social networks?
If we allow ourselves to dream for a moment, to really detach ourselves from any preconceived notions of what these AI agents will be capable of, we can just barely start to comprehend the types of tools that we’ll build (and inevitably take for granted) in the future. If you take nothing else away from this post, consider going for a long walk just to think deeply about what this future might look like. And once you’ve done that, do yourself a favor and pick up a few sci-fi books and then continue this thought experiment with ChatGPT as a conversation partner. 😉
 

Reliable Agents ⇒ AGI

You may already be familiar with the concept of AGI, or Artificial General Intelligence. You can think of AGI as an AI that is roughly as capable as an average human adult.
What’s the difference between AGI and AI agents? Well, for all intents and purposes, reliable, general-purpose AI agents are synonymous with AGI. AI agents may, however, be more specialized and narrow in nature, whereas AGI implies human-level reliability and fidelity across general tasks. Everything I’ve described so far is akin to L2-L3 general-purpose agents (borrowing the levels from the self-driving analogy), where you can give these agents general tasks and have them go off into the digital world and accomplish those tasks with a quality and reliability on par with the average human adult.
It’s important to keep in mind that these agents will not be perfect – in a similar way that humans aren’t perfect in their ability to accomplish tasks. We take sick days, we make mistakes, we miscommunicate, we get into arguments, etc. The way to think about agents, however, is to compare the productivity, reliability, and cost of an average human worker against those of a comparable AI agent within the fairly limited scope of a particular knowledge-worker role.
The most important point to keep in mind when thinking about AGI and AI agents is that they both live on a spectrum of generality, reliability, and autonomy, and the path + timeline from narrow AI agents of today to the reliable, general AI agents of the future is still very unclear.
There are some concrete predictions we can make, however, with this context in mind.
 

Predictions

I’ve found the following prompt to be particularly useful when thinking about agents:
What cognitive tasks are the most well-resourced people in the world – billionaires and world leaders – already offloading to teams of human assistants? ⇒ These will be the first tasks offloaded to AI agents.
Billionaires and world leaders trade money for time in order to make their lives more efficient. They hire teams of people to help manage their life. These human assistants help with coordinating events, distilling news into highly personalized briefings, handling their finances, coding product ideas, drafting emails / tweets / blog posts / books / speeches for them, and connecting them directly with the right human experts when necessary.
The future is already here for these top 1% of people, it’s just not evenly distributed to the rest of us yet.
Notice that these examples all have one big thing in common: they’re all examples of knowledge work. These tasks are all performed digitally on a computer, and none of them include physical labor. Physical tasks involve robotics (aka embodied agents) which use machines to manipulate the physical world, and embodied agents will come in their own time, but they are fundamentally more difficult than digital agents and are therefore still a bit further out for most use cases. There’s nothing stopping digital AI agents, however, from hiring humans on marketplaces like Fiverr or Upwork to accomplish physical tasks that they’re unable to do alone…
So why can’t we all have access to these types of resources? Well, it turns out it’s crazy expensive to hire teams of people to handle all of these things for you, and for 99% of us, it’s just not an option we’d ever even consider. The demand for these resources is already validated, however; it just hasn’t been economically feasible to offer these services at scale… yet.
Minority Report (2002)
Let me make a few concrete predictions here.
I believe that in the near future AI agents will democratize access to many of these resources that were previously reserved for the top 1%.
I believe this will empower us to do more ambitious and creative work.
I believe that someday soon, our children will learn 1-on-1 from personalized AI tutors that are as smart and capable as Einstein, Jobs, or Mozart.
I believe that AI agents will routinely hire human contractors as part of their toolset, relying on us heavily for three main types of tasks: embodied, physical tasks (in some sense, humans are extraordinarily resilient, general embodied agents), tasks which involve coordination among many people / organizations (which is difficult for agents because these tasks generally require a high degree of trust to accomplish), and tasks which have a very low threshold for error (such as tasks where moral or ethical considerations come into play).
And finally, I believe that one person with a thousand agents will be able to compete with entire corporations.
What will the UX look like for coordinating & conducting swarms of AI agents? (created with Midjourney v6)
Just imagine that for a second: a thousand agents doing work on your behalf at all times of the day, agents that you could rent out on the fly, agents that you could give very specific instructions to around your life goals, your values, your social accounts, and yes, even your bank account. It would be like having your own team of personal, human assistants whose sole purpose is to proactively work towards making your life “better”. Hopefully…
What I find craziest is that relying on these types of agents will very likely become second nature to us, just like how we treat our phones today as a convenient afterthought – even though the average phone with internet access in 2023 has access to more information than the most powerful person in the world did just 30 years ago (via Elon Musk talking about cyborgs). 🤯
At what point does the slope of technological progress go from being driven by humans to being driven by AGI – with humans being dragged along for the ride? Credit: @anthrupad on twitter
 

Challenges

Now, I’d like to address the elephant in the room directly here. Will there be downsides to this level of automation? Yes. Will there be massive job displacement in the short-term? Without a doubt. I believe, however, that this is already a foregone conclusion. Market pressures and geopolitical pressures mean that there are enormous wheels in motion that imho are simply impossible to stop. Are there serious risks as we inch closer and closer to AGI? Yes. Absolutely. And if AI keeps progressing so rapidly, it could lead to a level of societal upheaval that we haven’t seen since WWII. Or worse…
But I don’t believe that we’re there yet. And I definitely don’t believe that there’s nothing we can do about these risks. Let’s be intellectually honest here: this transformation is already happening, whether we’re prepared for it or not, there’s no stopping it, and it’s happening a lot faster than most of us would’ve predicted even just a few years ago.
There are incredibly important, unsolved challenges ahead of us. How do we make agentic programs reliable? How do we make agents understandable, observable, and interpretable? How do we make sure they remain safe? How do we sandbox them from going off the rails either intentionally or unintentionally? How do we handle auth? What’s the best way to add planning, long-term memory, task decomposition, and world-modeling to agents? And what can we do to ensure that no matter how powerful these agents become, they remain beneficial and aligned with humanity?
AGI political compass visualization by Rob Bensinger (June 2022). Sam Altman has stated that the ideal 2x2 for understanding possible AGI scenarios is short-term vs long-term and slow-takeoff vs fast-takeoff. This visualization replaces slow-takeoff vs fast-takeoff with the expected outcome being good for humanity vs bad for humanity and attempts to plot various AI researchers’ public views along these axes. Note that this is a subjective over-simplification and the specifics are likely out-of-date, though I still find it to be a very useful model for understanding the AGI landscape.
 

Towards the Future

We don’t have good answers to most of these questions just yet, but I’d like to take some inspiration from Rick Rubin here. Rick is one of the most prolific music producers of the past few decades. He’s worked with many of the world’s top music artists, even though as he likes to point out, he doesn’t play any instruments, he doesn’t sing, and he doesn’t even really know how to do audio mixing. When asked about this apparent conundrum, Rick responded that what he brings to the table is very simply taste. He knows what he likes, he has strong convictions around his taste, he’s very effective at communicating his taste, and artists love working with him because that provides the ultimate leverage.
So what does taste have to do with AI agents? Well, I believe that taste will remain the ultimate human leverage in a world filled with AI agents. Taste is where humans add the most unique value to any pursuit, and as our relationship with AI assistants continues to deepen, your taste in guiding agents – and your ability to guide them effectively – will be the two most important factors differentiating successful people from the rest of the pack.
I will absolutely be one of these early adopters, pushing the frontier of what’s possible to automate with AI, because I fundamentally believe that the benefits outweigh the risks. I’m an eternal optimist and am incredibly excited to help lead a small part of this revolution.
Accelerate 💯 💕 🚀
 

 

Dig Deeper