
No shortcuts for Generative AI Upskilling

Cambridge, MA, June 13, 2024 (GLOBE NEWSWIRE) -- Junior professionals may be better positioned to engage in real-time experimentation with new technologies, but MIT Sloan School of Management professor Kate Kellogg found that when it comes to generative AI, they may not be a reliable source of technical expertise or training for senior professionals.

In a new study conducted with Boston Consulting Group, Kellogg and her co-authors, including Hila Lifshitz, professor at Warwick Business School in the United Kingdom and visiting faculty at Harvard University, along with Steven Randazzo, Ethan Mollick, Fabrizio Dell’Acqua, Edward McFowland III, Francois Candelon, and Karim Lakhani, go against the grain of the existing literature and explore the obstacles that arise when senior workers try to upskill rapidly by learning from juniors.

“When it comes to emerging technologies like generative AI, these younger professionals are the ones who dive into experimenting with them first,” said Kellogg. “They’re ultimately looked to by upper management as sources of expertise, even though they aren’t experts on the new risks that generative AI poses because of its uncertain capabilities and exponential rate of change.”

Existing research by Matthew Beane, SM '14, PhD '17, shows that organizational leaders and senior professionals often look to junior workers for guidance in using new technologies effectively, since juniors are the ones closest to the actual work rather than removed from it by oversight and management responsibilities. However, this research has not considered emerging technologies, including generative AI, and tends to ignore the novel risks they present.

The study thus posed a central question: How and when might junior professionals fail to be a source of expertise for more senior members in the use of emerging technologies like generative AI?

“In order to appropriately address this problem, we had to investigate it in a context where generative AI can be easily accessed and customized by users of varying skills and technical backgrounds,” Lifshitz explained.

The researchers conducted interviews with a group of junior consultants (associate- or entry-level employees with one to two years of tenure and little prior experience using this technology) who were given access to OpenAI’s GPT-4 to help solve a business problem. Consultants were then asked: Can you envision your use of generative AI creating any challenges in your collaboration with managers? If yes, how do you think these challenges could be mitigated?

Junior workers suggested that managers (those with five or more years of experience) would be concerned about the risks that generative AI posed to valued outcomes, including the accuracy, explicability, and relevancy of its outputs. Juniors suggested mitigating these challenges through changes to human routines at the project level, such as having managers review juniors’ prompts and responses and reaching agreement within the team on the conditions under which generative AI could be used reliably.

Relying on junior staff raises more risks than rewards   

Historically, the main obstacle to juniors teaching senior professionals to use new technologies has been the threat to status felt by senior workers. This study, however, identified a different set of obstacles that contradicts existing research. The three key obstacles suggesting that juniors may not be reliable in teaching seniors are:

  1. Juniors’ lack of deep understanding of generative AI’s capabilities
  2. Juniors’ focus on mitigating risk through changes to human routines rather than system design
  3. Juniors’ focus on interventions at the project level rather than the system-deployer or ecosystem level

“Juniors who are working at the project level with their managers are quite likely to think about addressing challenges at that level. But, because generative AI platforms pull in extensive data and other information from a broad ecosystem of actors, interventions to address generative AI risks can’t be limited to lower-tier, project-based inputs. It’s not the whole picture,” said Lifshitz.

Fast experimentation and iteration are key, but on a broader scale  

The researchers therefore suggest moving beyond a focus on local experiments with human-computer interaction toward a much wider context in which all risk factors are considered, in order to close the gap in transitioning to these technologies. Before implementing generative AI practices in the workplace, organizational leaders should mitigate output risks by:

  1. Making changes to system design, such as by fine-tuning a model’s parameters on additional, specialized data
  2. Setting up automatic monitoring with a second system (a brief illustrative sketch follows this list)
  3. Designing generative AI systems to begin with a prompt asking the user to communicate their goals and preferences to the system
  4. Interacting with developers to specify requirements, such as assessments of the representativeness, robustness, and quality of their data sources, and upfront explanations of the systems’ capabilities and limitations
  5. Flagging and correcting misleading outputs
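To illustrate the second recommendation, the following is a minimal sketch of automatic monitoring with a second system: one model drafts an answer, and a separate reviewer call flags outputs that look inaccurate or irrelevant before they reach the user. It assumes the OpenAI Python SDK and GPT-4 (the model the study’s consultants used); the reviewer prompt and the pass/flag convention are illustrative assumptions, not details drawn from the study.

    # Minimal sketch of "automatic monitoring with a second system".
    # Assumes the OpenAI Python SDK (v1.x) and an OPENAI_API_KEY in the environment;
    # model names and the reviewer prompt are illustrative, not from the study.
    from openai import OpenAI

    client = OpenAI()

    def draft_answer(question: str) -> str:
        """Primary system: produce a draft answer to the user's question."""
        resp = client.chat.completions.create(
            model="gpt-4",
            messages=[{"role": "user", "content": question}],
        )
        return resp.choices[0].message.content

    def review_answer(question: str, answer: str) -> str:
        """Second system: review the draft and reply PASS or FLAG with a reason."""
        reviewer_prompt = (
            "You are a reviewer. Reply 'PASS' if the answer is accurate and relevant "
            "to the question; otherwise reply 'FLAG: <reason>'.\n\n"
            f"Question: {question}\n\nAnswer: {answer}"
        )
        resp = client.chat.completions.create(
            model="gpt-4",
            messages=[{"role": "user", "content": reviewer_prompt}],
        )
        return resp.choices[0].message.content

    if __name__ == "__main__":
        question = "Summarize the key risks of using generative AI in client deliverables."
        answer = draft_answer(question)
        verdict = review_answer(question, answer)
        # Only release answers the monitor passes; route flagged ones to a human.
        if verdict.strip().upper().startswith("PASS"):
            print(answer)
        else:
            print(f"Held for human review. {verdict}")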

“Simply expecting younger working professionals to learn to use AI tools through trial and error and pass their tips and tricks on to senior professionals is not going to bridge the learning gap required for effective implementation of generative AI,” Kellogg said. “It’s right that rapid experimentation and learning are key. But professionals and leaders need to experiment with changes to generative AI data, models, and infrastructure in addition to changes to human routines. And they need to mitigate generative AI risks not only at the project level, but also at the firm level and in their external interactions with generative AI developers.” 



Matthew Aliberti
MIT Sloan School of Management
781-558-3436
malib@mit.edu