
Onramps to AI for Higher Ed


By Camille Crittenden, CITRIS and the Banatao Institute, University of California

Tools and platforms endowed with artificial intelligence (AI) have seeped into many aspects of daily life. From the recommendation engines that drive Amazon and Netflix to the latest advances in large language models underpinning platforms like OpenAI's ChatGPT and Google's Gemini, AI has brought efficiencies to the home and workplace, along with considerable anxiety about its implications for the future.

Institutions of higher education are facing the same exciting but nerve-wracking questions that have emerged in other sectors. On the one hand, AI can help teaching faculty create better lesson plans and assessment instruments, researchers accelerate their investigations in a range of disciplines, and administrators improve operations. On the other hand, concerns have emerged about academic integrity, data security, and upskilling in the workforce, among other areas affected by the rapidly evolving field of AI. Institutional leaders must design flexible policies to ensure applications are beneficial, effective and safe.

The opportunities for creating operational efficiencies and, indeed, entirely new capabilities can be seen across a variety of university offices. In enrollment management, AI tools can predict student yield, optimize recruitment strategies and personalize outreach. They can support students as they navigate their academic careers by offering automated guidance on financial aid, course registration, library resources, and IT support, and they can flag students who may need extra academic help or who may be struggling to adjust to campus life for the first time. AI can help finance offices build better predictive models to manage energy consumption, campus maintenance and procurement. In HR, AI can help with performance management and workforce planning. Across all of these applications, however, those implementing such tools must be attuned to the possibility of bias in the training data or algorithmic models that could affect service delivery or have real consequences for those subject to the results.
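To make the bias concern concrete, here is a minimal sketch, using synthetic data and hypothetical feature names rather than any real student information system, of how a simple flagging model might be paired with a basic disparity check before anyone acts on its output:

```python
# Illustrative only: synthetic data and made-up feature names, not any
# university's actual advising system.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000

# Stand-ins for common advising signals; a real deployment would draw on
# institutional data under FERPA-compliant governance.
df = pd.DataFrame({
    "gpa": rng.normal(3.0, 0.5, n).clip(0, 4),
    "credits_attempted": rng.integers(6, 18, n),
    "lms_logins_per_week": rng.poisson(5, n),
    "first_generation": rng.integers(0, 2, n),  # used only for the audit below
})
df["needs_support"] = ((df["gpa"] < 2.5) | (df["lms_logins_per_week"] < 2)).astype(int)

features = ["gpa", "credits_attempted", "lms_logins_per_week"]
X_train, X_test, y_train, y_test, g_train, g_test = train_test_split(
    df[features], df["needs_support"], df["first_generation"], random_state=0
)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
flags = model.predict(X_test)

# Basic disparity audit: does the model flag one group far more often than another?
for group in (0, 1):
    rate = flags[g_test.values == group].mean()
    print(f"group {group}: flagged {rate:.1%} of students")
```

If the flag rates diverge sharply between groups, that is a signal to revisit the features and training data before advisers rely on the model's output.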

To control for and attempt to mitigate some of these risks, universities have employed a variety of strategies, including creating Responsible AI Principles, establishing AI governance committees, and expanding roles for existing staff in ethics and compliance, technology policy, privacy, procurement, and legal counsel. The job scope for CIOs and CTOs must now include attention to the responsible implementation of AI tools and to new risk management concerns. Faculty also have a voice in the areas where AI affects research, teaching and learning. While some have embraced the new pedagogical opportunities of AI, others are more wary of its effects on student learning and on the identity of the institution of higher education overall.

Key among the concerns for administrators and faculty alike is how best to ensure fairness and reduce bias when using these tools. Online programs that profess to detect AI-assisted cheating are notoriously flawed, flagging many honest students (false positives) while likely missing actual cases (false negatives). Some automated proctoring services that use facial recognition were not trained well on non-Caucasian faces and perform poorly as a result. Accusations of cheating can erode trust in the classroom and in the academic enterprise more broadly. Third-party audits can sometimes verify the accuracy of AI tools, but auditing is an emerging field as well, with many new and questionably qualified entrants.
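A quick back-of-the-envelope calculation, with rates assumed purely for illustration rather than drawn from any particular product, shows why even a seemingly small false positive rate can translate into a meaningful number of wrongly accused students:

```python
# Assumed numbers for illustration only; no specific detector is being characterized.
false_positive_rate = 0.01   # detector flags 1% of honest submissions
prevalence = 0.05            # assume 5% of submissions involve improper AI use
detection_rate = 0.80        # detector catches 80% of true cases
submissions = 500            # a large lecture course over a term

honest = submissions * (1 - prevalence)
true_flags = submissions * prevalence * detection_rate
false_flags = honest * false_positive_rate

print(f"Wrongly flagged students: {false_flags:.0f}")
print(f"Share of flags that are correct: {true_flags / (true_flags + false_flags):.0%}")
# With these assumptions, roughly one in five flags points at an honest student,
# which is why an accusation should never rest on a detector score alone.
```

The exact figures depend entirely on the assumed rates, but the exercise illustrates why detector output is better treated as one signal among several rather than as evidence on its own.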

As nonprofit organizations, many funded by public tax dollars, universities have a responsibility to make their decisions transparent and explainable, characteristics that many AI programs lack. Universities can build trust with their constituencies, whether students, faculty, or staff, by establishing procurement criteria that require vendors to provide the highest level of explainability in their products and services and to reduce reliance on “black box” systems.

At the same time, universities must uphold the highest levels of privacy protection for the vast and varied data sets they maintain. Student data is protected by law under the Family Educational Rights and Privacy Act (FERPA); other kinds of personally identifiable information (PII) are covered by the General Data Protection Regulation (GDPR) in Europe and by state-level privacy laws that apply to university records and research. Universities are ripe targets for cybersecurity attacks because of their troves of data. CIOs and CTOs must establish appropriate data governance policies to protect these important digital assets, which have been made more vulnerable by a proliferation of AI-driven phishing attempts and other incursions.

Of course, the primary mission of institutions of higher education is to…educate. Many universities are establishing fundamental courses to improve AI literacy, not only among the students but also in their current workforce. Even Higher Ed leaders outside the IT organization should understand how AI could help improve the efficiency and effectiveness of their work, in everything from writing letters of recommendation and marketing copy to creating data visualizations for their research findings. Concerns about AI replacing human jobs are not entirely unfounded; still, those employees who know how to harness the power of AI will become more productive and competitive in the marketplace for talent.

Many organizations are considering their approaches and priorities with regard to enterprise-wide AI implementation. Universities and their IT and academic leaders have an opportunity to lead the way by foregrounding questions of safety, ethics, fairness and equitable access. Together with corporate, government and nonprofit partners, Higher Ed institutions can foster a thriving and creative environment for AI and humans to flourish together.