Groundswell of AI Deployment Underscores Need for Risk Policy

Posted on November 5, 2025

By Stacey Hyland

Stacey Hyland is the Employee Benefit Practice Leader at global insurance brokerage HUB International in New England. 

There’s a groundswell of enthusiasm for artificial intelligence (AI) among Massachusetts policymakers and businesses that’s causing adoption to surge as a lever for boosting efficiency and productivity, among other benefits.

A state initiative is investing $100 million in AI as a way to spark business innovation, amid efforts like the Massachusetts AI Models Innovation Challenge. Businesses with a stake in it, like MassMutual, are pushing initiatives to encourage AI deployment while addressing its challenges.

A May survey found 43% of businesses in the Boston region are actively integrating AI into their operations. More broadly, a 2024 McKinsey survey found that 78% of respondents use AI in at least one function – up from 55% in 2023.

So, what can go wrong as AI goes mainstream? Plenty. And the potential risks – especially related to accuracy, cybersecurity and intellectual property infringement – are largely unaddressed. Only 20% of those using AI have a formal AI risk strategy.

But developing an internal AI policy can go a long way toward countering the risks as it aligns the organization and its stakeholders around best practices for use of workplace AI tools. Here are some guidelines.

Understanding an AI policy framework

As AI is integrated into more work processes, a policy on its use is as important as policies concerning paid leaves, codes of conduct or workplace safety. It establishes guidelines that are instrumental to deciding whether to allow AI use at all, which platforms are permitted and by which departments and individuals. It also informs decisions related to privacy and security concerns.

An AI policy must be comprehensive, based on collaborative input from key stakeholders. The policy committee should include human resources, legal/in-house counsel, finance/accounting, operations and IT. Other stakeholders should be included according to the enterprise’s industry. Think the HIPAA privacy officer for a healthcare concern, or compliance and data privacy officers for financial services.

In addition to establishing relevant guidelines for each department, the committee should also develop a discovery process for AI tools in order to evaluate a vendor’s reputation, data handling practices and security measures.

Must-have elements of an AI policy

The policy must clearly define the rules for AI’s workplace use, such as:

  • Prohibited uses, specifying which types of data must never be shared on AI platforms. This includes personally identifiable information (PII), protected health information (PHI) and proprietary information, like trade secrets.
  • Permitted uses, outlining when AI is allowed, like general research, writing sample documents and skill development.
  • Mandatory safeguards, such as fact-checking protocols to protect against errors and misinformation.
  • Required notices and disclosures ensure employees communicate their AI use, for what purpose, on what platform, and the results. These also include specifying which AI tools are used in developing work product.

Be your own end user

Horror stories abound about AI run amok, disseminating extremist content and disinformation. There’s no real way to know whether the AI platforms being used were developed with inherent biases that could work against an organization’s intents, interests and values. Such issues can be amplified inadvertently as AI tools “learn” biases from human input.

The AI policy should emphasize the need to regularly pressure test AI tools. For HR’s recruiting efforts, for example, staff should role-play as would-be job candidates, testing the applicant tracking system by changing their names to imply different ethnicities, resume details to imply different ages or years of experience, and addresses to show different geographic locations. See how such adjustments affect the system’s results.
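The pressure test above amounts to perturbation testing: vary attributes that should be irrelevant to the outcome and check that the score doesn’t move. A minimal sketch of that idea is below; `score_resume` is a stand-in for whatever scoring an ATS vendor actually exposes (a hypothetical toy scorer here, not a real API), and the perturbation list mirrors the name, age and address changes described in the text.

```python
# Hypothetical perturbation test for a resume-screening tool.
# score_resume is a toy stand-in: a fair scorer that ranks only
# on years of experience, ignoring name and location.

def score_resume(resume: dict) -> float:
    """Toy scorer: score depends only on years of experience."""
    return min(resume["years_experience"] / 10, 1.0)

BASELINE = {"name": "Jordan Smith", "years_experience": 8, "city": "Boston"}

# Changes that should NOT move the score if the system is unbiased:
# names implying different ethnicities, a different city, a gap in years.
PERTURBATIONS = [
    {"name": "Lakisha Washington"},
    {"name": "Wei Chen"},
    {"city": "Springfield"},
]

def run_pressure_test(score, baseline, perturbations, tolerance=0.0):
    """Return the perturbations whose score drifts beyond tolerance."""
    base = score(baseline)
    failures = []
    for delta in perturbations:
        variant = {**baseline, **delta}  # baseline with one field changed
        if abs(score(variant) - base) > tolerance:
            failures.append(delta)
    return failures

failures = run_pressure_test(score_resume, BASELINE, PERTURBATIONS)
print("Drifting perturbations:", failures)
```

Any non-empty result flags attributes the tool is reacting to when it shouldn’t, giving the policy committee concrete evidence to take back to the vendor.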

Communicating the AI policy

Developing a comprehensive AI policy is only part of the task at hand. It must be communicated effectively to ensure everyone understands what responsible and ethical use of AI tools looks like. Among the best practices:

  • Publish the policy widely within the organization for adequate access and exposure. Include it in the employee handbook and IT policy documents and, of course, as a stand-alone document.
  • Develop engaging content explaining AI policy. A series of videos or video clips can explain why the policy is important and key aspects in an easily understood and user-friendly way.
  • Use multiple channels to get the policy out to employees. Team meetings, one-on-one manager meetings and town halls are effective venues. Digital platforms like Microsoft Teams and company intranets also boost visibility and accessibility.

As AI continues to evolve, being proactive on policy development will be essential for staying ahead of the curve. It’s important to take an inclusive approach that involves key stakeholders, is built around clearly defined policy elements and communicates its provisions organization-wide. That’s how to mitigate risks, promote transparency and foster trust in AI practices.