Artificial intelligence (AI) notetakers are increasingly integrated into workplace meetings, offering automated transcription, meeting summaries, and more consistent documentation. Tools like Otter.ai, Fireflies.ai, and Microsoft Copilot promise to enhance efficiency, but their use raises significant concerns about privacy, data security, and regulatory compliance.
For organizations handling sensitive data—such as student records protected under FERPA, personally identifiable information (PII) subject to GDPR and CCPA, or proprietary business strategies—AI notetakers introduce potential risks that must be carefully managed. At Fordham University, IT security policies provide a structured approach to mitigating these risks while maintaining the benefits of AI-assisted transcription.
Key Privacy and Risk Concerns
While AI-powered transcription tools offer convenience, their use must align with institutional data protection, compliance, and IT risk management policies. Fordham University has issued policies, guidelines, and advisories for AI notetaker usage. Below are key concerns and applicable policies:
Lack of Informed Consent and Transparency
Many AI notetakers automatically record meetings without explicitly seeking consent from all participants. Some platforms display notifications, but others depend on meeting hosts to inform attendees—leading to inconsistent application of consent requirements.
Under Fordham University’s Etiquette and Considerations for Using AI Notetaker/Recording Tools in Zoom and Microsoft Teams Advisory, users are required to:
- Inform all participants before enabling AI notetaking or recording.
- Obtain explicit consent from all meeting attendees before proceeding with transcription.
- Allow participants to opt out if they do not consent to be recorded.
Failure to follow these steps could result in non-compliance with privacy laws such as GDPR, CCPA, and FERPA, which require clear disclosure and consent when recording personal information.
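As a rough illustration of these three steps, the sketch below walks through an explicit consent check before recording is enabled, keeping an auditable log of responses. It is a minimal example, not Fordham tooling: the attendee names, the prompt wording, and the consent-log file are all hypothetical.

```python
# consent_check.py - minimal sketch of an explicit-consent workflow.
# Hypothetical example: the attendee list and log path are illustrative only.
import json
from datetime import datetime, timezone

def collect_consent(participants: list[str]) -> dict[str, bool]:
    """Ask each participant for explicit consent before recording starts."""
    responses = {}
    for name in participants:
        answer = input(f"{name}, do you consent to AI transcription? [y/N] ")
        responses[name] = answer.strip().lower() == "y"
    return responses

def may_record(responses: dict[str, bool]) -> bool:
    """Enable recording only if every attendee has explicitly opted in."""
    return all(responses.values())

if __name__ == "__main__":
    attendees = ["Alice", "Bob", "Chen"]  # hypothetical attendee list
    consent = collect_consent(attendees)
    # Keep an auditable record of who consented and when.
    with open("consent_log.json", "a") as log:
        json.dump({"time": datetime.now(timezone.utc).isoformat(),
                   "responses": consent}, log)
        log.write("\n")
    if may_record(consent):
        print("All participants consented; AI notetaking may be enabled.")
    else:
        print("At least one participant opted out; leave recording off.")
```

Platform-native notifications, such as Zoom's recording banner, complement rather than replace this kind of explicit opt-in.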
Data Retention and Storage Risks
Where is the data stored once a transcript is generated, and how long is it retained? Many AI notetakers store transcripts and audio recordings in the cloud, increasing the risk of cyberattacks or unauthorized access. According to Fordham’s Data Classification Guidelines and Data Retention and Disposal Policy, members of the University must:
- Classify meeting data appropriately—transcripts containing PII, student data, or sensitive business discussions must be protected under Fordham Protected Data or Fordham Sensitive Data classifications.
- Limit data retention—ensure AI-generated transcripts are not stored indefinitely and are deleted once they are no longer needed.
- Secure data storage—any stored transcripts must follow encryption and access control requirements outlined in Fordham’s IT security policies.
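To make the retention and deletion requirements concrete, here is a minimal sketch of a scheduled transcript cleanup, assuming transcripts are stored as files in a local directory. The directory name and the 90-day window are illustrative assumptions, not values taken from Fordham's policy.

```python
# purge_transcripts.py - sketch of automated transcript retention enforcement.
# The directory and the 90-day window are assumptions for illustration; use
# the retention period your records policy actually specifies.
import time
from pathlib import Path

TRANSCRIPT_DIR = Path("meeting_transcripts")  # hypothetical storage location
RETENTION_DAYS = 90                           # hypothetical retention window

def purge_expired(directory: Path, retention_days: int) -> list[Path]:
    """Delete transcript files older than the retention window."""
    cutoff = time.time() - retention_days * 86_400  # seconds per day
    removed = []
    for path in directory.glob("*.txt"):
        if path.stat().st_mtime < cutoff:
            path.unlink()  # permanently remove the expired transcript
            removed.append(path)
    return removed

if __name__ == "__main__":
    for path in purge_expired(TRANSCRIPT_DIR, RETENTION_DAYS):
        print(f"Deleted expired transcript: {path}")
```

A job like this would typically run on a schedule (for example, via cron), with deletions logged so disposal can be evidenced during an audit.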
AI Misinterpretation and Compliance Risks
AI transcription is not perfect—speech-recognition errors, unfamiliar accents, and technical jargon can produce misattributed or inaccurate records. Inaccurate transcripts create liability if meeting summaries are used for compliance reporting, legal disputes, or decision-making. To address this risk, Fordham University recommends that AI-generated transcripts:
- Be manually reviewed before being stored or shared to ensure accuracy.
- Not be used in place of formal legal or contractual records unless verified.
- Follow Fordham’s Acceptable Use Policy, ensuring AI tools do not violate institutional or legal confidentiality requirements.
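Manual review is easier to enforce when questionable passages are surfaced automatically. The sketch below assumes the transcription service exports per-segment confidence scores, as many speech-to-text tools do; the segment format and the 0.85 threshold are hypothetical and should be adapted to the service actually in use.

```python
# flag_for_review.py - sketch: surface low-confidence transcript segments
# for human review before the transcript is stored or shared.
# The segment format and the threshold below are illustrative assumptions.

REVIEW_THRESHOLD = 0.85  # hypothetical confidence cutoff

def flag_segments(segments: list[dict]) -> list[dict]:
    """Return segments whose recognition confidence falls below the cutoff."""
    return [s for s in segments if s["confidence"] < REVIEW_THRESHOLD]

if __name__ == "__main__":
    transcript = [  # hypothetical export from a speech-to-text service
        {"speaker": "A", "text": "The Q3 budget is approved.", "confidence": 0.97},
        {"speaker": "B", "text": "FERPA wavers apply here.", "confidence": 0.61},
    ]
    for seg in flag_segments(transcript):
        print(f"REVIEW ({seg['confidence']:.2f}) {seg['speaker']}: {seg['text']}")
```

Flagged passages, like the likely mis-hearing of "waivers" above, go to a human reviewer before the transcript is treated as a record.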
Vendor Risk Management Considerations
Using an AI notetaker means sharing data with a third-party vendor, which introduces data security, compliance, and contract enforcement risks. Organizations should conduct due diligence to determine the following:
- Does the vendor meet security standards? Request a SOC 2 Type II report and assess security controls.
- Does the vendor use meeting data for AI training? Some AI transcription tools retain and process data to improve their models, potentially violating privacy policies.
- Are there compensating controls in place? Additional security measures (e.g., regular audits, restricted access) must be implemented if a vendor lacks full compliance.
Fordham University’s Third-Party Risk Management Policy requires all vendors handling sensitive or regulated data to undergo a risk assessment and comply with institutional security policies.
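As one way to keep this due diligence consistent, the questions above can be captured in a structured assessment record. The sketch below is illustrative only: the vendor name and field values are invented, and the pass/fail rule is a deliberate simplification of a real risk assessment.

```python
# vendor_assessment.py - sketch of a structured vendor due-diligence record.
# The sample vendor and its field values are invented for illustration.
from dataclasses import dataclass, field

@dataclass
class VendorAssessment:
    name: str
    has_soc2_type2_report: bool          # SOC 2 Type II report provided?
    trains_models_on_meeting_data: bool  # meeting data used for AI training?
    compensating_controls: list[str] = field(default_factory=list)

    def acceptable(self) -> bool:
        """Pass only if the vendor does not train on meeting data and either
        meets security standards or documents compensating controls."""
        if self.trains_models_on_meeting_data:
            return False
        return self.has_soc2_type2_report or bool(self.compensating_controls)

if __name__ == "__main__":
    vendor = VendorAssessment(
        name="ExampleNotes AI",  # hypothetical vendor
        has_soc2_type2_report=True,
        trains_models_on_meeting_data=False,
        compensating_controls=["quarterly access review"],
    )
    print(f"{vendor.name}: {'pass' if vendor.acceptable() else 'fail'}")
```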
Best Practices for Responsible AI Notetaker Use
To align AI notetaker usage with privacy and IT security policies, organizations should adopt the following best practices:
- Require explicit consent – Always notify and obtain approval from participants before enabling AI transcription.
- Limit data retention – Configure automatic deletion of transcripts per Fordham’s Data Retention and Disposal Policy.
- Restrict AI access – Do not use AI notetakers in legal, HR, or FERPA-protected student-record discussions.
- Assess vendor security – Verify compliance with SOC 2 Type II, GDPR, and other relevant standards.
- Train employees on AI risks – Educate staff on privacy, security, and best practices when using AI-powered transcription tools.
Final Thoughts
AI notetakers offer convenience, but their risks cannot be ignored. Organizations that use these tools must prioritize data security, informed consent, and compliance with institutional policies to prevent privacy violations and reputational harm.
By following Fordham’s IT policies and advisories, organizations can leverage AI’s benefits responsibly while ensuring that sensitive meeting data remains secure.
Before enabling AI transcription in meetings, the question is not only “Is it useful?” but also “Is it compliant?”