
Technology, Labor/Employment

Nov. 9, 2023

Five ways AI will change the workplace of the future

Jack Schaedel

Employment Law Neutral, Alternative Resolution Centers

UCLA Law School

Jack Schaedel is an employment law neutral with Alternative Resolution Centers. Over the course of his career, he has represented and advised both employees and employers in high-stakes litigation while also mediating cases through the LA Superior Court. He founded and chaired the Labor & Employment Section of the Pasadena Bar Association and serves on the Executive Committee of the Labor & Employment Section of the Los Angeles County Bar Association.

Artificial intelligence, for better or for worse, has inserted itself into every aspect of our lives. What was once the stuff of science fiction is now commonplace, as workplaces across the country integrate AI into their processes with varying levels of awareness of the consequences.

AI has generated both interest and concern at the highest levels. On Oct. 30 President Biden signed an executive order that will require businesses to develop safety and security standards, introduce consumer protections, and give federal agencies oversight of the evolving technology.

According to a Harvard Business Review paper on AI in the workplace, most tasks will continue to require humans, but new tasks will emerge that demand new skills. As AI and machine learning gradually replace workers at different levels, businesses will need to be agile in adapting and updating job skills and assignments.

The next few years will see at least five significant changes in how we think about and conduct work, and the law will respond to these changes. Employers must therefore understand the legal implications of these changes and must take steps now to limit harm to workers and to avoid potential litigation.

1. Automation of routine tasks

AI technologies, such as robotic process automation (RPA) and machine learning algorithms, are automating repetitive and mundane tasks. While this should save time and resources, allowing employees to focus on more complex and creative work that requires human intelligence, it will also render certain job functions obsolete.

As AI automates tasks requiring minimal thought, workers will use their minds in different and more sophisticated ways. In a way, this is nothing new. A construction worker who is given an excavator can dig much more quickly but requires new training to use the new tool.

In the 20th century, the construction employer and its workers would have faced several legal concerns as a result of the transition to a new tool. The employer may have asked whether it was required to provide training to its worker on how to operate the excavator. The worker, whose job became more productive, may have asked whether he could now demand a higher wage for using a different skill set. The employer might then have sought legal advice on whether it could demand a higher level of productivity in return for that higher wage.

Similar questions will be raised by AI in the 21st century. Consider a lawyer's job. Just as with construction workers, every lawyer must do some mundane tasks as part of the job. This includes document review and time entries, tasks that can be performed or improved by AI. Tools such as Relativity, ChatGPT, Lexis+ AI, and Casetext are continuing to take over certain legal tasks, freeing attorneys to focus on tasks and assignments that require higher-level thinking.

An attorney who has access to such AI tools can now work much faster. As with the construction worker, that attorney may now need new training, may be able to ask for higher wages, and may be expected to show increased productivity.

Employers may see the trade-off as an opportunity to trim budgets by eliminating jobs that can be done faster and cheaper by AI. But they must understand the legal and HR implications of job automation, including potential job displacement and the need for upskilling or reskilling programs. Employers, whether in construction, law, hospitality, education, or any other industry or profession, must start making changes now.

Every day, the AI workplace becomes more of a reality, and companies should now be developing training programs that will enable workers to incorporate AI tools effectively into their jobs. If job displacement is a real concern, employers must be transparent with their employees, explaining as early as possible what the business will be doing to incorporate AI into its processes and how this will affect them.

2. Enhanced decision-making

AI can analyze vast amounts of data and extract valuable insights quickly. By leveraging AI-powered analytics and predictive modeling, businesses can make better and quicker decisions on everything from logistics to pricing to how their workers are paid. Pay is a telling example. The California Fair Pay Act, which became effective Jan. 1, 2016, was intended to ensure that all employees receive equal pay for performing substantially similar work, regardless of gender or other protected categories.

But pay disparities continue to be prevalent at all job levels. It turns out that AI may be the solution for this pervasive problem. According to the Society for Human Resource Management, AI can help develop compensation metrics that reward employee efforts to advance an organization's goals. It can analyze labor market data to provide localized and up-to-the-minute competitive pay rates.

Employers should take steps now to ensure that existing pay disparities are not perpetuated through the use of AI. AI programs rely on algorithms created by humans, and if those humans are biased, so too will be the algorithms used to make decisions in the workplace.

According to the director of the FTC's Bureau of Consumer Protection, "You can save yourself a lot of problems by rigorously testing your algorithm, both before you use it and periodically afterwards." To ensure fairness and comply with privacy and data protection regulations, employers should therefore understand how their decision-making algorithms were created and act to reverse any inherent biases.
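
A periodic check of the kind the FTC recommends does not have to be elaborate. The following is a minimal sketch, assuming a hypothetical set of pay records with invented field names (job_family, group, pay): it compares median pay across demographic groups within each family of substantially similar jobs and flags gaps for human review. It is an illustration of the concept, not a compliance tool or legal advice, and a real analysis would control for legitimate factors such as seniority, location and experience.

```python
# Illustrative sketch only: flags pay gaps for human review. The column
# names and the 0.95 tolerance are assumptions made for this example.
from collections import defaultdict
from statistics import median

def flag_pay_gaps(records, threshold=0.95):
    """records: iterable of dicts with hypothetical keys
    'job_family', 'group', and 'pay'."""
    by_family = defaultdict(lambda: defaultdict(list))
    for r in records:
        by_family[r["job_family"]][r["group"]].append(r["pay"])

    findings = []
    for family, groups in by_family.items():
        medians = {g: median(pays) for g, pays in groups.items()}
        top = max(medians.values())
        for group, med in medians.items():
            ratio = med / top
            if ratio < threshold:  # gap larger than the chosen tolerance
                findings.append((family, group, round(ratio, 2)))
    return findings

# Example with invented sample data:
sample = [
    {"job_family": "Analyst", "group": "A", "pay": 90000},
    {"job_family": "Analyst", "group": "A", "pay": 95000},
    {"job_family": "Analyst", "group": "B", "pay": 82000},
    {"job_family": "Analyst", "group": "B", "pay": 80000},
]
print(flag_pay_gaps(sample))  # [('Analyst', 'B', 0.88)]
```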

3. Recruiting

AI is widely used by recruiters to screen and identify candidates, source talent from online platforms and databases, assess candidates' skills, conduct video interviews, and evaluate experience. Unfortunately, because of built-in biases, AI may favor certain information or words while rejecting others. Applicants may never know what particular word or words could have made the difference for them.

An audit might, for example, find that software inadvertently discriminates against people who appear, based on their resumes, to be over the age of 50. Even though the result was unintended, the employer could find itself in violation of the federal Age Discrimination in Employment Act or California's Fair Employment and Housing Act.

This potential for discrimination has raised a red flag for federal and state legislators. Last year, Congress introduced the Algorithmic Accountability Act of 2022, which would require the FTC to establish rules for companies to conduct impact assessments of their AI systems. In October 2022, the Office of Science and Technology Policy, concerned about the impact of AI on the hiring of individuals with disabilities, released its Blueprint for an AI Bill of Rights. In January of this year, the Equal Employment Opportunity Commission held a hearing on how to prevent unlawful bias in the use of AI, and it created a webpage entitled "Artificial Intelligence and Algorithmic Fairness Initiative," which looks at fairness in the AI-utilized workplace. On April 17 the Department of Labor hosted an online "think tank" to examine the use of AI tools in hiring.

At the local level, New York City adopted Local Law 144, effective July 5, which governs employers' use of "automated employment decision tools," or AEDTs. The law requires companies to have an independent auditor conduct a bias audit of the recruiting tool and to post the audit results on their websites. Employers that fail to obtain an audit within a year of using the AEDT are barred from using it to screen candidates for hiring or promotion.
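
The audit the law requires is built around impact ratios: for each demographic category, the tool's selection (or scoring) rate is divided by the rate of the most-selected category. The short sketch below illustrates that calculation on hypothetical screening outcomes; the categories and numbers are invented, and it is not a substitute for the independent audit the law requires.

```python
# Illustrative impact-ratio calculation of the kind a bias audit reports.
# Hypothetical data only; not a substitute for an independent audit.
def impact_ratios(outcomes):
    """outcomes: dict mapping category -> (selected_count, total_applicants)."""
    rates = {cat: sel / total for cat, (sel, total) in outcomes.items()}
    top_rate = max(rates.values())
    return {cat: round(rate / top_rate, 2) for cat, rate in rates.items()}

# Hypothetical screening results by category:
screening = {
    "Category A": (60, 100),   # 60% selected
    "Category B": (45, 100),   # 45% selected
    "Category C": (30, 100),   # 30% selected
}
print(impact_ratios(screening))
# {'Category A': 1.0, 'Category B': 0.75, 'Category C': 0.5}
```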

In California, A.B. 331 targets discrimination from AI software in employment, education, housing, utilities, health care, financial services, and other areas. It would require both developers (creators or coders) and users of the automated tool to submit annual impact assessments to the California Civil Rights Department by 2025, and the Department would be able to impose a $10,000 fine for each day an impact assessment was not submitted. This would be the first bill in the nation to divide those responsibilities, according to Bloomberg Law.

Under the proposed bill, companies using AI would be required to publish a policy listing the types of tools used and how risk of discrimination was managed. For decisions made solely through AI, affected individuals would be given an opt-out right, provided it was "technically feasible." This could raise compliance challenges, as it may not be clear what would constitute a violation or what would be considered "technically feasible." The bill was not signed into law in 2023 but will be reconsidered in 2024.

As with all uses of AI in the workplace, employers should take affirmative steps now to ensure that all algorithms used in recruiting are free of bias and that processes remain accessible to all candidates, keeping accommodation issues a priority. To the extent that federal or state laws may ultimately govern this issue, companies must make sure they are in compliance with such laws.

4. Intelligent virtual assistants

Virtual assistants (VAs) powered by AI, such as chatbots or voice assistants, will become more sophisticated and prevalent in the workplace over the next few years. These assistants can handle routine inquiries and streamline interactions with customers and employees. Gartner predicts that by 2025, 50% of knowledge workers will use a VA daily, up from 2% in 2019.

One Fortune 500 enterprise software company reportedly created a generative AI tool to assist customer service agents. Rather than leave tasks to an "automated decision tree" and bring in human operators only to troubleshoot, the company kept humans entirely in the loop. Researchers compared the performance of groups who were given the tool with those who weren't and found that the tool significantly improved the performance of lower-skilled agents.

Companies will need to seriously consider the legal implications of using VAs, particularly with respect to data privacy, security, and employee monitoring.

5. Personalized learning and skill development

AI will revolutionize training and skill development by leveraging adaptive learning algorithms to deliver personalized training programs, identify knowledge gaps, and enhance employee learning. Employers can use AI to address different learning styles, providing a personalized learning experience for their employees. AI-powered virtual coaches can guide employees through their learning activities and personalize the learning platform to recommend content, monitor progress, answer content-related questions and send push notifications about content or deadlines.

Using AI, employers are better able to identify employee skill gaps and suggest ways to close them. Machine-learning algorithms can predict outcomes, which allows employers to provide specific content based on the learner's past performance and individual goals. When such AI-driven learning programs are used to supplement training employers are already providing, the human element is still present. AI offers more brain power for learning and training without demanding more time.

By using AI training programs, employers will have a means to measure employee efforts toward professional improvement. They will be able to track this training throughout the year, measure how much training each employee completed, and take this into consideration when determining promotions and pay raises.

While taking appropriate steps to protect intellectual property rights and to address potential issues related to data privacy and security, employers should use AI training programs to ensure equal access to learning opportunities. They should make sure that employees are paid during their training and take steps to ensure that employees get the full benefit of all the training they complete.
