AI is going to fundamentally change the fabric of our society. Dr. Eric Schmidt suggests that within a year the vast majority of programmers will be replaced by AI programmers. In a recent chat with Mark Zuckerberg, Satya Nadella shared that AI now helps write up to 20-30% of the software at Microsoft. Even DARPA is creating exploratory models of human-AI teams to understand how humans and AI will work together.

AI is forcing us to reflect on the work we do and ask whether it truly has value.

Taken to the extreme, folks are exploring using AI to build software end to end from scratch. A recent example I read was from Jonathan Barned, who used the robust and detailed HTTP/2 spec to get an LLM to build a web server from scratch. It is compelling reading, and it implies that, with a detailed enough specification and validation tools, AI can write software end to end. It also offers a glimpse of how roles may evolve: humans define the specs and build the validation requirements that give AI the right guardrails, accelerating desired outcomes.

It’s even upending the startup world. You can now vibe code your way to $1 million ARR in 17 days. ARR, or Annual Recurring Revenue, has always been a key metric used to measure the predictable, consistent revenue a business expects to receive over a year from recurring subscriptions or contracts. It used to be that achieving $1 million ARR in year one would stand you in really good stead as a start-up. Now the top start-ups are 5x-ing that, and it is AI that is changing the game. Andreessen Horowitz has a nice breakdown on this, and I love that they call out the fact that speed is the moat.

As AI tools move from sidekick to core contributor, traditional team structures are starting to evolve. We’re entering a world where teams are blended - made up of humans, AI agents, and new hybrid roles designed to coordinate them. Microsoft lays this out with the concept of a Frontier Firm.

Even startup founders are looking to have an AI CTO or AI CMO or even an AI co-founder instead of their human counterparts.

This evolution isn’t just about adding a bot to Slack or Teams. It’s about re-architecting workflows, reshaping how roles interlock, and rethinking how accountability is shared. It also means that where teams were traditionally formed around domain knowledge and functions, we will see a shift to teams dynamically formed around outcomes and goals, because deep functional knowledge will be more readily available on demand.

We need to rethink how teams are formed and how collaboration will work in this era of AI. As teams evolve, roles are likely to change too. There will be a paradigm shift from doers to orchestrators: humans will spend less time doing and more time directing, editing, and integrating AI tools and workflows in an evolved human-AI collaborative engagement. There will be a stronger emphasis on outcomes and strategy, with a focus on guiding a hybrid human-AI team towards them.

In the world of software engineering, a human role may shift towards code reviewer and AI supervisor: AI focuses on boilerplate code and mundane tasks, while devs focus on higher-level architecture, overall quality, and edge cases. This could also mean the emergence of newer roles, such as prompt engineer, AI product manager, or human-in-the-loop specialist. Framing the core problems well will become a more important human skill going forward.

Teams will also evolve to include AI teammates who show up as if they were another employee on your team. We are starting to see this already, with lightweight AI teammates showing up as meeting note-takers, researchers, and content helpers, and we can imagine this evolving even further as AI agents become more sophisticated and domain-aligned.

In practice, this means a team might consist of:

  • 4 humans
  • 2 AI agents (e.g., “WriteBot” and “ResearchBot”)
  • 1 human responsible for supervising and tuning the agents (e.g., an “AI lead”)
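To make that composition concrete, here is a minimal sketch of a blended team roster as plain data. All names ("WriteBot", "ResearchBot", the humans) are illustrative assumptions, not real tools or people:

```python
from dataclasses import dataclass

@dataclass
class TeamMember:
    name: str
    kind: str  # "human" or "ai_agent"
    role: str

# Hypothetical blended team: 4 humans (one acting as "AI lead") + 2 AI agents.
team = [
    TeamMember("Asha", "human", "engineer"),
    TeamMember("Ben", "human", "engineer"),
    TeamMember("Chloe", "human", "designer"),
    TeamMember("Dev", "human", "AI lead"),  # supervises and tunes the agents
    TeamMember("WriteBot", "ai_agent", "writing"),
    TeamMember("ResearchBot", "ai_agent", "research"),
]

humans = [m for m in team if m.kind == "human"]
agents = [m for m in team if m.kind == "ai_agent"]
print(len(humans), len(agents))  # 4 humans, 2 AI agents
```

The point of the sketch is that once agents are modeled as teammates alongside humans, team composition becomes something you can reason about and query like any other roster.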

This will lead to new types of team structure. We will see the rise of AI-enhanced pods with a mix of humans and AI agents. We will also see a rise of AI-human layered teams, where humans set the strategy, vision, and outcomes; AI agents execute against them; and humans review, refine, and make final decisions.

These AI teammates will be deeply integrated across the tools and ecosystems used every day. They will show up alongside humans and be something that can be continuously collaborated with as part of native workflows.

Imagine how a content marketing team in 2030 could look:

  • Creative Lead (human)
  • Prompt Curator (human)
  • Brand Voice AI (agent)
  • Image Generation Bot (AI agent)
  • Content QA Bot (AI agent)
  • Fact-Check Specialist (human)
  • Audience Optimizer AI (agent)

But this will not be without challenges. As we move to AI-infused teams, questions arise that will need strong answers:

  • Redefine accountability: Clarify who is responsible when an AI makes a mistake and determine ownership of AI-generated output.
  • Foster AI literacy: Ensure every team member understands how to interact with and guide AI systems effectively.
  • Establish AI ethics and usage policies: Develop internal guidelines for appropriate AI use, output review processes, and when human oversight is necessary.
  • Monitor AI health and performance: Track AI agent metrics such as uptime, performance, and hallucination rates. Similar to how human team members are evaluated, should AI agents also be evaluated?
  • Build team culture: How do you build a healthy team culture when some team members are AI? Does AI reinforce existing biases, amplify them, or challenge them? And are all team members included and empowered to use AI agents and tools?
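The monitoring point above can be sketched in code. This is a hypothetical AI-agent "scorecard", assuming a team tracks agent health much as it reviews human performance; the field names and the 5% threshold are illustrative assumptions, not any existing product's API:

```python
from dataclasses import dataclass

@dataclass
class AgentScorecard:
    name: str
    uptime_pct: float          # share of time the agent was available
    tasks_completed: int
    hallucination_rate: float  # share of outputs flagged as incorrect

    def needs_review(self, max_hallucination: float = 0.05) -> bool:
        # Flag the agent for human review if its error rate drifts too high.
        return self.hallucination_rate > max_hallucination

# A healthy uptime does not rule out a quality problem: 8% flagged outputs
# exceeds the 5% threshold, so this agent would be escalated to a human.
writebot = AgentScorecard("WriteBot", uptime_pct=99.2,
                          tasks_completed=180, hallucination_rate=0.08)
print(writebot.needs_review())  # True
```

Whatever the exact metrics, the design choice is the same one teams already make for humans: define what "good" looks like up front, measure against it continuously, and route exceptions to a person.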

Some of this will be good old change management but some will be pushing into new frontiers.

This could also bring other challenges to the mix. As more AI is used, do humans start to become over-reliant on AI, leading to skill atrophy? If I don’t need interns anymore because AI does the work of 10 interns, where do interns go to learn valuable early work skills? Do those early work skills change?

A recent article in the Washington Post highlighted how more folks are skipping meetings and sending AI notetakers in their place, resulting in AI outnumbering humans in some meetings. If we are all delegating to AI, are we missing out on the human connection that is so important to moving humanity forward? How do we strike the right balance?

Lots to think about and we are just at the beginning here.