The Future of Work: Artificial Intelligence and Teamwork
AI and The Future of Work
The Future of Work will be discussed under the umbrella “A World Beyond HR” at a two-day conference in Germany. Rebecca Lundin, co-founder of the Swedish company Celpax, will be talking about Artificial Intelligence from a team perspective.
The Future of Work is still very much about humans… albeit heavily shaped by Artificial Intelligence.
Of course, no one has a clear view of how artificial intelligence will transform our workplaces.
What we do know is that – as always in history – companies and employees will have to adapt to the drastic changes which lie ahead.
Some changes are already here. Some will soon come knocking on your company door.
Is your organization future-proofing?
How Artificial Intelligence and People Will Work Together in Teams
The German Think-Tank 2B Ahead is organizing a future of work conference where speakers will spark debates around topics like how Artificial Intelligence and people will work together in teams.
How do we get Artificial Intelligence to become a team member that acts in the best interests of your company?
We also need to future-proof our existing colleagues and focus on how to develop a growth mindset.
Artificial Intelligence and Decision Making
A burning topic these days is Artificial Intelligence and decision making. What roles should AI have?
What are its competencies, its limits and where does it optimally complement us humans? Which levels of decision-making authority make sense?
What are the benefits of artificial intelligence?
AI dramatically improves our analytic and decision-making abilities. It helps us get the “right” information at the right time.
From a decision-making perspective, we face questions like: should it be possible to lose your job based on the decision of an algorithm?
Drawing lines is tricky. And let’s not forget, human decisions aren’t perfect either. Most things we currently accuse AI of, such as bias, could probably be said of humans too.
How does AI decide in an ethical dilemma?
There is a need to talk about the areas where AI best complements us humans.
And to determine tasks currently carried out by humans that could be replaced by AI.
Can you imagine AI as a colleague? Or even your boss?
So imagine that tomorrow all the processes are in place, and you find yourself greeting your new digital team colleague: Artificial Intelligence.
What do both sides have to learn from each other, in order to successfully work as a team?
How do we get to know each other best? Is it the same as having a human co-worker?
Can AI become part of the team if it never comes to lunch and talks about its hobbies? “Machines are generally poor at understanding a person’s mood”, according to Lynda Gratton, founder of The Future of Work Research Consortium.
How do we get AI to “live” the values of our company?
Is it the same if the AI is your colleague, compared to the AI being your manager?
What are the chances and dangers of an automated boss? And are we, as humans, ready for such a setting on an emotional level?
AI Thinks as Far as the Data Set It Was Trained On
AI is very efficient at certain things, but it only thinks as far as the data set it has been trained on.
And it needs massive amounts of training data.
Let’s talk about who trains it.
How do we overcome potential bias in the training data, like the bias machine-learning specialists detected at Amazon?
Or facial recognition systems that recognize white faces with a much higher accuracy than black faces:
“If the photo was of a white man, the systems guessed correctly more than 99 percent of the time. But for black women, the percentage was between 20 and 34 percent.”
The list of examples like this is a bit too long.
Because we humans aren’t impartial. So if we train them… the result most likely won’t reflect just how diverse our workplaces are.
Machine training means learning by examples. What examples do we feed the data set?
As we teach machines, they will adopt our imperfections: our beliefs, morals, ideologies, and so on. However, all of this will be adopted without an important filter: the moral compass to evaluate it. It’s hard to train intuition.
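To make “learning by examples” concrete, here is a minimal, hypothetical sketch (the groups and labels are invented for illustration) of how a model trained on skewed examples silently extrapolates the majority group’s pattern onto a group it has never seen:

```python
from collections import Counter

def train(examples):
    """Learn the most common label overall and per group."""
    overall = Counter(label for _, label in examples)
    per_group = {}
    for group, label in examples:
        per_group.setdefault(group, Counter())[label] += 1
    return overall, per_group

def predict(model, group):
    overall, per_group = model
    counts = per_group.get(group)
    # If a group never appeared in training, fall back to the
    # global majority -- i.e. the pattern of the dominant group.
    if counts is None:
        counts = overall
    return counts.most_common(1)[0][0]

# Skewed training set: every example comes from group "A",
# group "B" is entirely absent.
train_data = [("A", "hire")] * 80 + [("A", "reject")] * 15
model = train(train_data)

print(predict(model, "A"))  # "hire" -- well supported by A's data
print(predict(model, "B"))  # "hire" -- but based on no data about B at all
```

The prediction for group “B” looks just as confident as the one for “A”, yet it is purely an extrapolation from the examples we chose to feed the model. That is the danger with unrepresentative training data.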
How soon before the AI can help us (and itself) to detect and improve biases?
What kind of training and what rules would be necessary?
As a colleague, do I have to know how it’s been programmed, or rather its programmer, in order to know what makes it tick?
How the AI collects and processes data is an interesting angle. What level of transparency is currently available in this area?
And if there is transparency, is it presented in such a form that non-programmers can make sense of it?
That way, you can more easily determine when to trust the AI to make decisions on its own, without supervision. And when you need to activate mechanisms to intervene if you see there is a problem.
This will also require training for humans… and an open mind.
Without a growth mindset, many workers will see their job go to colleagues (or new recruits) who have learned how to cooperate with machines.
A study from Accenture showed that “CXOs believe only 26 percent of their workers are prepared to work with AI—which explains why companies are not investing enough in skill-development programs”.
Research has also confirmed a human tendency to apply greater scrutiny to information when expectations are violated.
MIT’s Moral Machine platform ran a crowd-sourced survey to find out how people worldwide thought moral decisions should be made when it comes to self-driving cars.
People were asked to weigh in on the classic “trolley problem”, choosing whose safety they thought should be prioritized in an unavoidable accident.
The results showed a huge variation across different cultures.
Again, who is training the AI, and with what cultural lens? Where are they based? Where will the AI act?
PS. Take the test! It’s freakishly hard to decide. Here in Sweden, for example, cars are obliged to stop at any zebra crossing if a pedestrian is about to cross, regardless of whether there is a traffic light or not. Would that add to the cultural differences? The Moral Machine’s goal is to help us reflect on important and difficult decisions.
Respecting Human Workers’ Integrity
There is also the question of transparency the other way around: with listening capabilities, what conversations might the AI pick up on in the workplace?
How will companies use that information and make sure human workers’ integrity and sensitive information are kept safe?
Add to this that we humans seem to want to maintain the illusion that the AI truly cares about us.
Is our workplace law keeping up with the rapid technology advancement? What regulations need to be revised?
Do you remember your first day at work?
Our future AI teammates could be onboarded using a similar pattern to how trust is currently built between colleagues and managers. Show them the ropes!
Start by trusting them with small, routine tasks that are easy to verify.
Once trust has been established, they can dive into real work, with complexity and importance added as you go along.
To sum up:
Thomas Koulopoulos says: “The challenge of artificial intelligence isn’t so much the technology as it is our own attitude about machines and intelligence.”
We need to talk more about what changes are needed to make sure humans and AI can work together as a team, in the most effective way.
Leaders need to figure out their roles in prepping their people for the future of work.
And have some fun while we’re at it!
See you in Germany?