Navigating AI: A Guide for HR Professionals | Benefits Collaborative

What is AI?

Artificial intelligence (better known as AI) is an umbrella term for a machine’s ability to make predictions, recommendations, and decisions, and to perform other tasks that would normally require human intelligence. Generative AI models, for instance, can create text, images, audio, and video in response to user prompts. ChatGPT is a kind of generative AI tool called a large language model. It functions similarly to the text predictor on your text messaging app—the feature that predicts and suggests what your next word will be—but at a much greater scale and with much more sophistication.

It’s important to note that AI is not actually intelligent. It isn’t cognitive or aware. If you asked ChatGPT to give you a compliment, the AI model would say something nice about you, but it wouldn’t mean it. It isn’t capable of feelings, perceptions, or opinions. Given this limitation, AI should not be used as a substitute for human judgment.

The Legal Landscape

All the laws that govern employment still apply when you use AI to help make decisions or take action. Hiring and promotional decisions based on AI must still be free of discrimination. AI used in conjunction with providing and administering employee benefits must comply with the Employee Retirement Income Security Act (ERISA) for covered employers. Using AI for data analysis must still comply with the Health Insurance Portability and Accountability Act (HIPAA), the Health Information Technology for Economic and Clinical Health (HITECH) Act, and other laws. AI does not absolve you of your compliance obligations.

As more and more AI solutions enter the market and AI becomes further integrated into the workplace, we can expect legislative and regulatory activity. Illinois and Maryland (as well as New York City) already regulate the use of AI in hiring.

Best Practices

If you decide to leverage AI for HR and compliance purposes, we recommend the following practices:

  • Be diligent when considering and testing AI tools—no AI tool will be perfect, but some may be more reliable than others. Consult with an attorney when vetting AI vendors and reviewing contracts.
  • Maintain the highest level of privacy practices and standards with all information exchanged with an AI tool.
  • Implement and enforce an AI policy or set of guidelines so employees understand how they should and shouldn’t use AI at work.
  • Rely on human expertise to evaluate what AI creates for you. As when using any knowledge-supporting tool (e.g., a search engine), assume it can and will make mistakes.
  • Set aside time to fact-check information and materials created by an AI tool, and monitor AI use for discriminatory outcomes and other unlawful practices.
  • Make sure any AI product your organization uses aligns with and contributes to your business needs.
  • Keep your actual pain points in mind when thinking about ways to leverage AI tools. Survey employees about aspects of their work they dislike the most and areas of their work they think may benefit from an AI solution.
  • Develop an AI strategy that explains what you’re using AI to accomplish and how you’ll measure success. Periodically evaluate your uses of AI against those goals and metrics. For example, if a goal for using AI is to save time, does using it in fact save time?
  • Be transparent with employees regarding your point of view and intentions related to AI. Not everyone is excited about AI and what it means for their jobs. People have very strong feelings about it, positive and negative. As you develop and implement AI practices, monitor morale, solicit employee feedback, and show your appreciation for it. You’ll likely get more buy-in from employees if they have a say in how AI changes their work.
  • Encourage employees to share how they’re using AI and what’s working and not working. Ensure that everyone feels safe raising concerns, asking for help, or admitting that AI isn’t working as the company may have hoped.
  • Plan for continued education and constant monitoring. AI technology is advancing rapidly. Employees will need regular training as models develop and new laws pass.
  • Continuously monitor federal and state law.

Practices to Avoid

Some practices may spell trouble for your organization. We recommend avoiding the following:

  • Assuming an AI model or its output complies with federal and state laws. When asked to draft a termination letter, for example, an AI tool may produce a letter that cites reasons for the termination that it pulls out of thin air—and those reasons may even be unlawful. Don’t hand over AI-generated resources or publish AI-produced copy without thoroughly vetting it.
  • Assuming AI’s sources are reliable or real. Just because AI tells you a law, regulation, or court case exists or says a certain thing doesn’t mean it does.
  • Allowing yourself to be persuaded by AI’s confident tone. AI can sound authoritative when what it’s telling you is wrong or completely made up.
  • Relying on AI to make employment-related decisions. AI does not provide you with a “get out of liability free” card.
  • Using AI technology to analyze employee data containing protected health or personally identifiable information.
  • Creating legal or legally required documents with generative AI.
  • Uploading anything into an AI model that you wouldn’t want shared publicly.
  • Replacing human expertise with AI content.

Originally posted on Mineral
