Your AI-Powered Team: Empowering And Trusting Them To Leverage AI

AI is a powerful tool, but only if you and your team use it responsibly, safely, ethically, and effectively. And it’s evolving so fast you can’t rely only on rules and policies - you need to trust your team members to exercise good judgement and use AI wisely.

There’s a lot of talk, hype, and noise about the impact of AI on work and the workforce:

  • Will it create jobs?
  • Will it replace jobs?
  • Is it going to destroy the fabric of society?


A recent article reported that 40% of surveyed CEOs predicted AI would destroy humanity within the next decade! This is a ridiculous claim, and you should ignore it - for three reasons: it was a tiny sample of (mostly American) CEOs; an existential threat to humanity is not their area of expertise; and, most importantly, regardless of whether we believe it or not, it’s not very useful to most of us.

So, as a leader or manager, what should you be thinking about instead?


AI will definitely have an impact on your organisation, your work, and your team. So, it’s essential to consider that impact and how to leverage this powerful technology.

A more relevant story, which surfaced a few months ago, was about Samsung, one of the world’s smartest tech companies. Some Samsung software engineers got into trouble for asking ChatGPT for help improving their software. ChatGPT did help, but the engineers hadn’t considered that they were uploading proprietary code to a third party!

Samsung management was horrified and immediately banned all engineers from using ChatGPT. However, the company is now developing its own internal AI tool for its engineers.

Most of us don’t have the resources of a giant company like Samsung to build our own AI tools. So, as a leader or manager, how can you help your team use AI tools like ChatGPT effectively?

It all comes down to trust


You can create guidelines, rules, and policies, but they only go so far. At one extreme, you could ban AI tools, but that’s risky, because others, including your competitors, will use them. At the other extreme, a policy that allows unrestricted use of AI tools is also risky, as Samsung discovered.

Is there a middle ground? Can you establish some clear policies that cover all cases? Unfortunately, no. With AI technology evolving rapidly, policies and rules alone will never be enough. Instead, you need to trust your team.

Do you trust your team members to use AI effectively? If you don’t, the problem lies with you as a leader or manager, not with them.

To build trust and enable them to use AI effectively, consider four stages of trust:

  1. Information

    Give them access to the necessary information, tools, and resources.

  2. Skills

    Help them build the skills to use these tools effectively - through training, mentoring, coaching, and safe environments to practise.

  3. Judgement

    Encourage them to build good judgement when using these tools (for example, knowing when and how to question and check the results from ChatGPT).

  4. Wisdom

    Finally, help them understand a broader context when using these tools (for example, sometimes choosing NOT to use AI - as in the Samsung example).


Author Credits

Gihan Perera is a business futurist, conference speaker, AI researcher, and author who shows you how to be fit for the future in a fast-changing world. For more than 25 years, he has worked with organisations and leaders throughout Australia and the world, helping them to lead in uncertainty, act with clarity and confidence, and thrive in a fast-changing world. Learn more by visiting https://gihanperera.com.
