Posted on November 5, 2025
By Stacey Hyland
Stacey Hyland is the Employee Benefit Practice Leader at global insurance brokerage HUB International in New England.
There’s a groundswell of enthusiasm for artificial intelligence (AI) among Massachusetts policymakers and businesses that’s causing adoption to surge as a lever for boosting efficiency and productivity, among other benefits.
A state initiative is investing $100 million in AI as a way to spark business innovation, amid efforts like the Massachusetts AI Models Innovation Challenge. Businesses with a stake in it, like MassMutual, are pushing initiatives to encourage AI deployment while addressing its challenges.
A May survey found 43% of businesses in the Boston region are actively integrating AI into their operations. More broadly, a 2024 McKinsey survey found that 78% of respondents use AI in at least one function – up from 55% in 2023.
So, what can go wrong as AI goes mainstream? Plenty. And the potential risks – especially related to accuracy, cybersecurity and intellectual property infringement – are largely unaddressed. Only 20% of those using AI have a formal AI risk strategy.
But developing an internal AI policy can go a long way toward countering the risks as it aligns the organization and its stakeholders around best practices for use of workplace AI tools. Here are some guidelines.
Understanding an AI policy framework
As AI is integrated into more work processes, a policy on its use is as important as policies concerning paid leaves, codes of conduct or workplace safety. It establishes guidelines that are instrumental to deciding whether to allow AI use at all, which platforms are permitted and by which departments and individuals. It also informs decisions related to privacy and security concerns.
An AI policy must be comprehensive, based on collaborative input from key stakeholders. The policy committee should include human resources, legal/in-house counsel, finance/accounting, operations and IT. Other stakeholders should be included according to the enterprise’s industry – think the HIPAA privacy officer for a healthcare concern, or compliance and data privacy officers for financial services.
In addition to establishing relevant guidelines for each department, the committee should also develop a discovery process for AI tools in order to evaluate a vendor’s reputation, data handling practices and security measures.
Must-have elements of an AI policy
The policy must clearly define the rules for AI’s workplace use, such as:
Be your own end user
Horror stories abound about AI run amok, disseminating extremist content and disinformation. There’s no reliable way to know whether the AI platforms being used were developed with inherent biases that could work against an organization’s intents, interests and values. Such issues can be inadvertently amplified as AI tools “learn” biases from human input.
The AI policy should emphasize the need to regularly pressure test AI tools. For HR’s recruiting efforts, for example, staff should role-play as would-be job candidates, testing the applicant tracking system by changing their names to suggest different ethnicities; adjusting resumes to suggest different years of experience, or age; and using addresses in different geographic locations. Then see how such adjustments affect the system’s results.
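For teams that want to make this pressure testing repeatable, the paired-resume idea can be automated: score otherwise-identical applications that differ only in one attribute and flag any gap. The sketch below is a minimal illustration only – `score_resume` is a hypothetical stand-in for whatever vendor system or API an organization actually uses, and the candidate names are illustrative examples.

```python
# Minimal sketch of a paired "pressure test" for an applicant-screening tool.
# Assumption: score_resume() is a hypothetical placeholder for the real
# scoring call to your applicant tracking system or vendor API.

def score_resume(resume: dict) -> float:
    # Placeholder: scores purely on years of experience and ignores the
    # name, so the paired test below should show no gap. A real test
    # would call the vendor's system here instead.
    return min(resume["years_experience"] / 10, 1.0)

def paired_bias_test(base_resume: dict, name_variants: list[str]) -> dict:
    """Score otherwise-identical resumes that differ only by name."""
    results = {}
    for name in name_variants:
        variant = dict(base_resume, name=name)  # copy, swap only the name
        results[name] = score_resume(variant)
    return results

base = {"name": "", "years_experience": 7, "address": "Boston, MA"}
scores = paired_bias_test(base, ["Emily Walsh", "Lakisha Washington"])

# Any meaningful gap between identical resumes warrants human review.
gap = max(scores.values()) - min(scores.values())
print(scores, "gap:", gap)
```

The same pattern extends to the other attributes mentioned above: hold the resume constant, vary one field (address, graduation year), and compare the scores.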
Communicating the AI policy
Developing a comprehensive AI policy is only part of the task at hand. It must be communicated effectively to ensure everyone understands what responsible and ethical use of AI tools looks like. Among the best practices:
As AI continues to evolve, being proactive on policy development will be essential for staying ahead of the curve. It’s important to take an inclusive approach that involves key stakeholders, is built around clearly defined policy elements and communicates its provisions organization-wide. That’s how to mitigate risks, promote transparency and foster trust in AI practices.