In a sea of seemingly unending coverage of AI, particularly chatbot tools like ChatGPT, too little has been said about the organizational challenges these technologies will create. As with past technological shifts, the technology itself often has a profound impact on how we structure and manage organizations. Previous generations might have struggled to imagine entire departments and industries centered around Information Technology, just as many of today's workers might still be confused about what Digital Marketing teams do on a daily basis.
AI promises to be even more impactful, as it's perhaps the first mass-market technology that can self-direct to a limited extent. Where previous technologies produced a consistent output from a well-defined set of inputs, AI tools act a bit more human in their ability to produce varied outputs from often-limited input.
A Python script won’t run unless the code is properly structured and the required data are available, but ChatGPT will happily write a song or compare two sports cars from little more than a one-sentence prompt. AI tools are even developing the ability to collaborate, with a “team” of AI-driven bots dividing up tasks: one bot writes marketing copy, another proofreads it, and a third formats it for posting to a website.
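The bot "team" described above is essentially a pipeline: each bot's output becomes the next bot's input. A minimal sketch of that structure is below; `ask_bot` is a hypothetical stand-in for a call to any chat-style AI service, not a real API.

```python
# A minimal sketch of a three-bot content pipeline: one bot drafts,
# one proofreads, and one formats for the web.
def ask_bot(role: str, prompt: str) -> str:
    # Placeholder: a real implementation would call an AI service here.
    return f"[{role}] {prompt}"

def content_pipeline(brief: str) -> str:
    # Each step consumes the previous bot's output.
    draft = ask_bot("copywriter", f"Write marketing copy: {brief}")
    edited = ask_bot("proofreader", f"Fix errors in: {draft}")
    html = ask_bot("formatter", f"Format as HTML: {edited}")
    return html

print(content_pipeline("spring sale"))
```

The notable point is how little human input is required: a one-line brief fans out into multiple specialized steps with no further direction.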
The addition of increasingly autonomous bots will require management teams to develop new skills and techniques to harness what may be the most significant change in the workforce in decades. Here are some of the skills and major changes that will likely be triggered by an increasing role of robots in the workforce.
Tech Leaders as R&D
Most tech leaders are used to being consulted about emerging technologies, which might range from complex AI systems one day to helping pick out the CEO’s smartwatch the next. This advisory function will be increasingly in demand as AI matures.
Once again, technology has captured the public imagination with a ferocity not seen since the early days of the web or smartphone. Tech leaders will have a chance to become internal strategic advisors around these emerging technologies, helping their colleagues understand and assess the capabilities and limitations of these tools.
Ideally, tech leaders will also launch more formal initiatives to test and apply these tools in limited capacities to their organizations. While it might seem like taking on additional, perhaps unfunded projects, having a thoughtful position on the application of AI, combined with demonstrations and experimental results, will make you the recognized thought leader on AI.
While it might seem natural that a tech organization should lead the deployment and management of AI tools, many of these tools are becoming more “digital workers” than digital tools. As such, they can be effectively deployed to team leads, managers, and knowledge workers who may be outside IT. The human-like interfaces to tools like ChatGPT make it appear that these are generalized virtual workers that may not need management or maintenance from the tech or digital department.
However, these technologies are not without risks and nuances. For tech leaders to maintain their relevance, they should understand and be at the forefront of AI adoption in their companies.
An Accelerated 24/7 Culture
Many have lamented the “always on” culture of the modern workplace, particularly at managerial and leadership levels. Increasingly, global connections at even smaller companies have spawned global teams that are pursuing opportunities (and finding problems) around the clock.
Most of the concerns about this culture stem from the very human need to rest and recuperate, with countless studies demonstrating the increase in performance that comes from rest and recovery. However, for bots these concerns are moot. At least in their current state of evolution, bots don’t complain about downtime, don’t require rest, and will happily work around the clock with no coffee break.
This might seem great from a productivity standpoint, but leaders need to consider who will monitor and manage these relentlessly productive bots. Obviously, no human will be able to match the stamina of a machine, so leaders need to consider how to appropriately balance the needs of humans with the performance of machines.
While there are certainly lessons on managing highly productive machines from automated factories and the like, perhaps the more vexing aspect of this challenge is that bots generally produce output requiring human analysis and further refinement. Not only will our teams be augmented with endlessly productive AI counterparts, but those counterparts will increase the volume and velocity of the information their human teammates must process.
Human Language, without Human Norms and Mores
Another significant challenge for executives when adopting new AI tools is that these machines are convincingly human in their outputs. They demonstrate a degree of creativity and, like a human, can produce an output with minimal input and then refine that output through ongoing conversations.
However, that “human-like” interaction occurs without the other elements of humanity, both good and bad. These tools have no moral code, no fear of being fired, no compunction about lying, and no sense of what may or may not be appropriate in the workplace beyond the training and adjustments incorporated by their creators.
While this might sound like the premise of a science fiction novel, there are very practical implications: an AI may confidently provide a fabricated or incorrect answer, a phenomenon often called a “hallucination.” There’s no malintent behind it; rather, some element of the AI’s training caused it to produce the wrong response.
Admonishing a human who routinely provided false information might ultimately cause that individual to be more careful in their research and responses, but with an AI, the root cause is more mercurial and more difficult to detect and correct.
In a similar vein, a human can generally “show their work,” citing the sources they used and the thought process they followed to generate a response. Tools like ChatGPT are a bit more vague. When I asked ChatGPT to cite sources on a brief analysis of D-Day, the tool made an appeal to authority, responding that “my knowledge is based on general historical information widely available in reputable historical accounts and documents.”
Responses like this are fine for generating content or general research but might not have the rigor required for launching a major new initiative, directing investment, or managing areas that might impact life and limb.
Executives need to be aware of the capabilities and limitations of these tools and not assume that they’re infallible. The adage to “trust but verify” is appropriate, and leaders can and should demand human verification of information that might be used to make a critical decision.
Leaders will also need to come to terms with the fact that most AI-based tools, particularly those built on some variation of neural networks or deep learning, cannot be “debugged” the way a spreadsheet or a simple computer program can. Many of these models adjust their own internal parameters, so even someone with access to all the code and all the data would be unable to explain why an AI made a particular decision. Our regulatory and public markets have yet to fully grasp this inability to audit an AI tool, and executives will need to develop organizational checks and balances to work around it.
The Rise of the “Bot Wrangler”
Just as the increasing use of computers created the IT department, the rise of AI is likely to create a new organizational structure to acquire, train, and maintain a fleet of robotic workers. New roles will also likely emerge to break general tasks into work for individual bots, and then process and consolidate their outputs.
There are no universal “best practices” for how to set up a bot management organization, so be open to experimentation and adjustment.
The most dramatic shift will be for organizations that, for years, optimized for individual contributors or knowledge workers who performed highly skilled tasks in isolation or as part of an informal team with few or no direct reports. In the bot era, adding incremental virtual team members will have very little cost, allowing even the most junior employee to have a large team of bots at their disposal.
While these “bot wranglers” won’t have to deal with the foibles of large groups of human beings, they will have to learn the art and science of breaking complex tasks down and assigning them to the right resources for completion. Individuals who are used to working alone may struggle in this environment, and skills that made someone a high-performing individual contributor may not map to managing a team of bots.
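The core wrangler skill, breaking a job into subtasks and routing each to the right resource, can be sketched in a few lines. The `BOTS` registry and `dispatch` function below are hypothetical illustrations of the pattern, not a real framework; the lambdas stand in for calls to specialist AI bots.

```python
from typing import Callable

# A hypothetical sketch of the "bot wrangler" pattern: break a complex
# job into (skill, task) subtasks, route each to a specialist bot, then
# hand the consolidated outputs back for human review.
BOTS: dict[str, Callable[[str], str]] = {
    "research": lambda task: f"research notes on {task}",
    "draft": lambda task: f"draft text for {task}",
    "review": lambda task: f"review comments on {task}",
}

def dispatch(subtasks: list[tuple[str, str]]) -> list[str]:
    """Route each (skill, task) pair to the matching specialist bot."""
    return [BOTS[skill](task) for skill, task in subtasks]

plan = [("research", "market size"),
        ("draft", "landing page"),
        ("review", "landing page")]
outputs = dispatch(plan)
print(outputs)
```

Note that the hard part is not the dispatch loop but writing the plan: decomposing the work well is exactly the management skill high-performing individual contributors may not yet have.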
There’s rampant speculation that AI will result in a significant loss of human jobs. However, while computers certainly put many typewriter salespeople out of business, they also created entirely new industries of previously unheard-of jobs.
As leaders, we should be mindful of this evolution and begin building the skills needed to manage a team of virtual workers.
Rethink Your Junior Roles
One area where legitimate talent concerns abound is junior staff. For most technology departments, entry-level engineering positions are the first step in a long-term tech career. These junior engineers write and debug code, manage testing, and gather basic functional requirements: all tasks that AI tools can perform increasingly effectively.
It makes little sense to hire a junior JavaScript developer when ChatGPT can write code at a similar skill level. However, today’s junior engineer is tomorrow’s technical architect, team lead, or executive.
As ChatGPT and similar tools start replacing entry-level tech jobs, make sure you’re thinking about how you’ll maintain and grow your tech talent pipeline. Perhaps the person who would be an individual contributor can now manage two or three bots and grow their talents as a junior system architect or functional designer. Perhaps your junior resources might collaborate with more senior engineers, identifying and implementing new opportunities for automation.
Whatever the case, avoid the temptation to create an overly top-heavy organization that lacks any pipeline for young talent to develop and grow into tomorrow’s leadership. Eliminating your junior roles without considering new ways to employ less experienced tech workers might seem like a smart cost-cutting move today but will backfire when you’re forced to hire expensive external talent months or years in the future.
AI tools are clearly here to stay and will continue to evolve at a rapid pace. The workforce of the future might consist of individual humans managing a team of a dozen bots or may evolve into a centralized corporate “bot farm” that’s controlled by a few human wranglers. Regardless of which extreme takes root, tech executives have a chance to define and lead one of the biggest technology transitions yet.
With some consideration of the organizational impacts, understanding of the rapidly evolving technology, and a thoughtful approach to how humans’ roles will change, we can position tech to lead the transition effectively.