
Balancing Innovation and Privacy: Rethinking AI Use on Campus

By Hazem Farra, Professor of Information Systems at St. Cloud State University

Why Guidance Matters

Over the last two years, I have been questioning what role AI should play in my classroom. After reading, observing, and attending sessions, I finally leaned into it. Last spring, I assigned a semester-long business case built around a bakery startup simulation. From the first project, students chose their preferred tools. Out of twenty-three students, nineteen used ChatGPT. By mid-semester, some had shifted: six moved to Copilot, and four tried Gemini.

In Co-Intelligence, Ethan Mollick outlines AI’s evolving role as co-author, tutor, coach, creative, and even colleague. These metaphors resonate with many of us in higher education. But what’s missing is a serious conversation about privacy. What happens when student-generated prompts become part of someone else’s model? If AI is going to support learning, then protecting trust through privacy must be the first principle, not the last concern.

AI is not going away. It has the potential to streamline feedback, reduce grading overload, and spark creativity. But it also presents serious risks to data privacy, intellectual property, and academic integrity.

Where We Stand Now

Campus IT has confirmed that Copilot is permitted when accessed through university-issued accounts. That distinction is critical because enterprise versions of Copilot are subject to different privacy terms, which exclude user prompts from model training. The concern is that students, unaware of the difference, may use personal Microsoft accounts or access the tool without signing in, which puts them outside the campus compliance umbrella.

Zoom AI was approved for campus use, while tools like Read.ai and Fireflies.ai were explicitly blocked. The change appeared as a brief notice in a weekly newsletter, but no official instruction or direction followed. These decisions lack visibility across departments and do little to promote informed or ethical use of AI on campus.

Only 14 percent of universities currently operate a board or committee tasked with reviewing AI tools before they are introduced into academic settings (CXOTech Magazine, February 2025). The rest continue adopting tools informally, often without shared governance or clear policy.

Why the Models Want Your Data

Large language models improve in part by training on new inputs from users. In educational settings, that input can include student essays, assignment prompts, drafts, and project work. A March 2024 study (Large Language Models for Education: A Survey and Outlook) found that 42 percent of education datasets available to AI developers include personally identifiable information (PII).

The better these models get, the more they are shaped by private and potentially sensitive data. When a student pastes a business plan or an IT security proposal into ChatGPT Plus, that text is stored by default unless the user navigates to a settings panel and disables data sharing. Most students don’t know this. I wouldn’t have known either if I hadn’t started researching the data ingestion practices of generative AI tools.

Student Work in the Wild: AI, Privacy, and Risk

While current AI projects are kept inside our LMS, the same cannot be said for older assignments. A simple public search turned up several unredacted lab reports on CourseHero bearing my name, the course title, and student identifiers. These documents are now indexed and used by the platform’s AI assistant to generate solutions.

CourseHero and similar platforms grant students access in exchange for uploading solved assignments. This practice leads to the unintended sharing of academic materials, often including names, university details, and private IDs. It also means those files can be used to train tools that may later misguide other students with outdated or incorrect information.

Worse, as this content accumulates, external tools can begin to infer how assignments are structured and graded. Students looking for shortcuts often rely on these platforms, even though the answers may be outdated or wrong. Meanwhile, the integrity of course ideas and materials is no longer within the instructor’s control.

Over time, the steady appearance of unredacted coursework online can also quietly shape how a university is perceived. It suggests a lack of content oversight and may raise concerns about academic credibility among students, future employers, and accreditors.

Guardrails for Responsible Use

  • Establish strong contractual boundaries. Institutions should only license AI tools that guarantee no student prompts will be used for training.
  • Make opt-out the default. Students and faculty must be shown clearly where the “improve the model” option is located. Ideally, institutions should invest in a version that disables it altogether.
  • Rework the assignment design. Include short reflections where students explain how they used AI, what they revised, and how their understanding shaped the final product.
  • Protect instructor content. Redact names, course codes, and identifiable data from materials; a minimal redaction sketch follows this list. Instructors and departments should submit takedown requests when legacy content appears online.
  • Form a privacy and compliance committee. This board should include faculty, IT security, legal counsel, and student representation. No AI tool should be deployed in coursework without review and consensus.
  • Integrate privacy education into classrooms. First-year courses are the ideal place to begin. Optional workshops and club events are not sufficient. Privacy awareness must be embedded into the curriculum early and meaningfully.
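To make the redaction step concrete, here is a minimal sketch of what an automated first pass might look like in Python. The specific patterns are assumptions, not campus standards: the eight-digit student ID, the course-code format (e.g., “IS 443”), and the phone format are invented examples that would need to be tuned to an institution’s actual identifier formats.

    import re

    # A regex-based first pass. The ID and course-code patterns are invented
    # examples; adjust them to your institution's actual formats.
    PATTERNS = {
        "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
        "STUDENT_ID": re.compile(r"\b\d{8}\b"),                # hypothetical 8-digit campus ID
        "COURSE_CODE": re.compile(r"\b[A-Z]{2,4}\s?\d{3}\b"),  # hypothetical format, e.g. "IS 443"
        "PHONE": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
    }

    def redact(text: str) -> str:
        """Replace likely identifiers with labeled placeholders."""
        for label, pattern in PATTERNS.items():
            text = pattern.sub(f"[{label} REDACTED]", text)
        return text

    if __name__ == "__main__":
        sample = "Lab 3, IS 443. Contact jdoe@university.edu, ID 12345678, 320-555-0142."
        print(redact(sample))
        # Lab 3, [COURSE_CODE REDACTED]. Contact [EMAIL REDACTED], ID [STUDENT_ID REDACTED], [PHONE REDACTED].

A pass like this catches only formatted identifiers; names and other free-text details still require manual review or a dedicated PII-detection service before anything is shared or archived.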

Closing Call

Universities cannot afford to treat AI adoption as a race. If we prioritize clarity, caution, and collaboration, AI can still serve as a valuable partner in education. But without guardrails, the trust that holds our learning environments together could begin to fade quietly, then rapidly unravel.


About the Author
Hazem Farra teaches Management Information Systems at St. Cloud State University, focusing on programming, IT and cloud infrastructure, web server management, and security and risk management. He holds an MS in Information Assurance and brings experience in software engineering and database design and development. His teaching emphasizes hands-on labs and real-world examples to build job-relevant skills and engages students in critical topics such as AI, privacy, security, and ethics in information systems.

References
Mollick, Ethan. Co-Intelligence: Living and Working with AI. Little, Brown Spark, 2024.
“Onramps to AI for Higher Ed.” CXOTech Magazine, February 2025. https://cxotechmagazine.com/onramps-to-ai-for-higher-ed/
Wang, Shen, et al. Large Language Models for Education: A Survey and Outlook. arXiv, March 2024. https://arxiv.org/pdf/2403.18105.pdf