The Rise of AI in the Workplace: A Double-Edged Sword for Business Owners & HR Practices

AI has quickly emerged as a powerful tool reshaping much of what we do in the workplace, particularly in leadership and Human Resource roles. From automating administrative tasks to streamlining scheduling, onboarding, and benefits enrollment, the efficiency gains are giving leaders hours back each day to focus on strategic initiatives. But while there’s no doubt that AI is enhancing productivity, it also introduces serious concerns that every employer and HR professional should carefully consider before integrating these tools into their daily practices.

The Hidden Risks of Inputting Confidential Information into AI Tools

One of the most significant concerns is the misuse or accidental exposure of confidential employee or company data through AI platforms like ChatGPT, Gemini, and others. Even when the intention isn’t malicious, employees may unknowingly input sensitive information, such as performance data, medical history, pay rates, or disciplinary records, into AI tools that are not secure or properly encrypted. This raises important questions: Who has access to that information? Is it stored? Is it discoverable in litigation? Without clear internal policies and employee training, the risk of data exposure and potential breaches of trust is high. Confidential or personally identifiable information should never be placed into AI platforms unless they are designed with strict encryption, security protocols, and privacy compliance in mind. You might consider sending your team a quick reminder about this issue or sharing this post directly to help raise awareness!

Human- & Business-Centered Policies Can’t Be Automated

Beyond compliance, there’s also the matter of human connection. Policies and communications are most effective when they’re written with empathy, intention, and understanding of your company’s unique culture. When AI is used to generate templated policies or tone-deaf communications, it can come across as cold, impersonal, or dismissive of the human experience. Employees want a personalized approach that considers real workplace and human dynamics, not just algorithmic predictions. A human-centered policy isn’t just about legal coverage; it’s about building trust, reinforcing values, and creating a culture that employees are proud to be part of.

AI Isn’t a Substitute for Employment Law Expertise

We recently heard from a CEO who proudly mentioned using ChatGPT to write their employee handbook, offer letters, contracts, and more. While we admire the resourcefulness and agree that AI can be a valuable drafting assistant, there’s a real danger in relying on it as a stand-in for professional expertise. Your employees can sense when a policy lacks clarity, depth, or relevance, and that lack of attention to detail doesn’t go unnoticed. More importantly, AI can get it wrong, and we see this regularly. For example, we asked ChatGPT to confirm the laws on paying out unused time off at termination across ten U.S. states, and it got four of them wrong. ChatGPT can offer general guidance, but it often lacks the specificity needed for nuanced situations, like outlining exact payout timelines or distinguishing between different types of time off. In this case, a leader relying solely on AI could have overpaid a departing employee beyond what they were entitled to, unintentionally creating financial strain for the business.

In another example, we recently saw a company allocate a specific number of time-off hours to all employees; when asked why, the CEO shared that the number was based on advice from AI. By following that guidance without further context, he ended up giving away tens of thousands of dollars’ worth of time during a period when cash was tight. That money may have been better directed toward bonuses or other meaningful incentives. It’s a reminder that while AI can be a helpful tool, it shouldn’t replace thoughtful, strategic decision-making, especially when it comes to people and compliance.

As a reminder, the AI bears no liability or accountability if its guidance leads to noncompliance or a lawsuit. When it comes to legal requirements, state nuances, or culturally sensitive workplace policies, there’s no replacement for experienced HR professionals who understand how to interpret and apply the law accurately in context.

AI in Hiring: What New Laws Mean for Employers

AI is rapidly transforming hiring practices, where it’s increasingly used to screen candidates, automate communications, and even analyze facial expressions in interviews. This growing reliance has sparked regulatory action, beginning with New York City’s first-in-the-nation law requiring annual bias audits of automated hiring tools, followed by federal guidance from the EEOC. Here are several states that have taken proactive steps to integrate AI regulations into their hiring and employment laws:

California (Effective Oct. 1, 2025)
Employers with 5+ employees must ensure AI tools used in hiring or evaluations don’t discriminate. This includes systems that assess skills, voice, or facial expressions. Anti-bias testing and accommodations may be required.

Colorado (Effective Feb. 1, 2026)
All employers using high-risk AI must have policies in place, complete regular risk assessments, notify people when AI is used, allow them to fix incorrect info, and report discrimination issues to the state within 90 days.

Illinois (Effective Jan. 1, 2020)
Employers using AI to analyze video interviews must tell applicants how it works, get their consent, and delete the video if requested. They must also track and report demographics of those impacted by AI decisions.

Maryland (Effective Oct. 1, 2020)
If using facial recognition in interviews, employers must get written consent from the applicant with clear details about the use.

New York City (Effective Jan. 1, 2023)
Employers must do annual bias audits of AI hiring tools, post the results online, give candidates 10 days’ notice, and allow them to request alternatives or accommodations.

At Rising Tide HR, we embrace AI as a powerful tool to enhance efficiency and streamline our work, but never as a substitute for professional expertise. Our team stays ahead of evolving regulations to make HR compliance easier, smarter, and stress-free. Reach out today to learn how Rising Tide HR can support your business.

Morgen Monie

Morgen Monie is a versatile leader with 15+ years of Human Resource and Leadership experience in technology and sales organizations. She thrives in highly innovative and complex organizations that value an outstanding employee experience. Morgen is passionate about diversity and equality in the workplace and has created dozens of programs supporting employees of a minority demographic.

https://www.risingtidehr.com