The Cal State Student Association (CSSA), as a student-led organization, strives to improve the lives of California State University (CSU) students by advocating for student needs and engaging students in systemwide, state, and federal higher education policymaking.

The California State University has entered into a contract with OpenAI for ChatGPT Edu, providing nearly 470,000 CSU students and approximately 63,000 faculty and staff with access to the premium version of the platform. The agreement costs the CSU system approximately $17 million and spans 18 months, running from February 2025 through July 2026.

Throughout this document, all referenced data, unless otherwise noted, are drawn from the 2025 Systemwide AI Survey conducted by San Diego State University (https://aisurvey.sdsu.edu/dashboard/). The terms “generally agree” and “generally disagree” refer to the combined responses across the categories of “Strongly Agree/Disagree,” “Agree/Disagree,” and “Somewhat Agree/Disagree.” The following line items are presented without regard to order of importance.

The content herein reflects discussions and input from the CSSA Systemwide Affairs Committee, comprising the twenty-three student representatives, one from each CSU campus, as well as the ongoing work of the Vice President of Systemwide Affairs, Katie Karroum, on the contract between the CSU and OpenAI for ChatGPT Edu. Vice President Karroum serves as the student representative on behalf of all CSU students in systemwide deliberations and decision-making processes, including participation in the planning and implementation of the initiative.

1. Equity, Access, and Implementation

While the CSU’s ChatGPT Edu launch represents a historic step in expanding digital access, students emphasize that access alone is not equity. Students recognize that a systemwide rollout helps prevent a digital divide between well-resourced and under-resourced campuses. At the same time, survey data indicate broad agreement on AI’s potential benefits, with 59.8% of students and 72.2% of faculty generally agreeing that AI technology can enhance creativity and innovation (SDSU). However, students also stress that implementation without meaningful student input risks deepening inequities in understanding, usage, and trust. Across campuses, students report that many peers are unaware that ChatGPT Edu is available to them or are unclear about how to use it effectively. Students are also raising concerns about faculty transparency: new policies requiring professors to declare in their syllabi whether AI use is permitted are an important step toward consistency, but they also reveal the fragmented rollout of this initiative. Through various discussions, students have warned that meaningful AI engagement, not mere usage, remains extremely low among students, as awareness and training lag far behind activation numbers. Students describe the implementation as rushed, top-down, and lacking transitional support, leaving many to “catch up” with a technology that has already reshaped classroom expectations.

2. Academic Integrity, Grading, and Classroom Consistency

One of the most repeated themes among students is the absence of a consistent, transparent classroom policy on AI use. This inconsistency is reflected in student survey data, where 61.4% of students generally disagree that their professors encourage the use of AI in coursework, despite 64% of students generally agreeing that AI has positively affected their learning at the university. Students describe situations where some professors encourage AI literacy while others penalize any perceived use of it, creating confusion, fear, and mistrust. This contradiction, in which students recognize learning benefits from AI while receiving little to no consistent instructional support, highlights the misalignment between faculty practices and students’ experiences and outcomes. Students believe that these conflicting faculty approaches have left them unsure of what constitutes “acceptable use.” That uncertainty is further reinforced by the fact that 66.8% of students generally disagree that their professors teach them how to use AI effectively, suggesting that discouragement often occurs without guidance. Faculty reliance on unreliable AI-detection tools has also led to mass accusations, in some cases against dozens of students at once, damaging academic credibility and prolonging time to graduation. As a result, stigma persists: 23.8% of students strongly agree that they would feel embarrassed if someone found out they used AI for schoolwork, indicating that fear and shame continue to shape student behavior even as AI becomes more prevalent. Students have noted that their campuses are working on AI Integrity Policies to address false accusations and to ensure due process for students accused of AI plagiarism.
This student anxiety contrasts sharply with faculty responses, as the most common faculty reaction (30.6%) was disagreement with the statement that they would feel embarrassed using AI for a job-related task, highlighting a cultural gap between how students and faculty perceive acceptable AI use. Across the system, however, no unified standards exist, leaving students to navigate an inconsistent academic environment that can vary not just by campus, but by classroom. Students urge the CSU to issue clear systemwide guidance defining ethical use, academic honesty, and faculty responsibility. Without this, AI in the classroom risks eroding rather than enhancing the learning experience and critical-thinking skills that higher education seeks to build.

3. Privacy, Data Security, and Transparency

Student concern about AI is deeply rooted in fears over personal privacy, with 83.5% of students generally agreeing that they worry about AI’s impact on personal data. Students have expressed widespread uncertainty about the data privacy and governance of the CSU’s AI partnership. Many question whether the Chancellor’s Office or individual campuses monitor data compliance, what happens to student data after the CSU–OpenAI contract expires, and whether conversations, assignments, or user behavior data are being stored or shared. Despite the CSU’s intent to provide a secure, institutionally licensed platform, students report preferring the free version of ChatGPT over ChatGPT Edu, not out of convenience, but out of fear that their activity could be monitored. This behavior reflects a broader trust gap, where the presence of an official systemwide tool has not translated into confidence about how student data are handled. Additionally, student leaders across the CSU question the funding transparency behind the $17 million deal with OpenAI, asking whether these funds are derived from student tuition, state appropriations, or reallocated CSU resources. These fiscal concerns are intensified by the fact that many students observed the rollout of AI funding alongside a trend of intensive campus and system budget cuts, with the system facing a $2.3 billion budget gap, reinforcing perceptions that institutional priorities were set without sufficient student input. Through various discussions with student leaders, a majority expressed frustration that the Cal State Student Association was not consulted or notified prior to the finalization and publicizing of the contract agreement, despite CSSA serving on systemwide committees and having direct lines of communication with the Chancellor’s Office. The high level of privacy concern, lack of governance clarity, and absence of stakeholder consultation convey a disconnect between system-level decision-making and student buy-in.
Students collectively call for plain-language transparency and involvement in all AI-related contracts and a clear explanation of the fiscal, ethical, and data-handling implications for every campus.

4. Sustainability and Environmental Responsibility

Across campuses, students repeatedly identified environmental sustainability as one of their most pressing concerns. Students feel excluded from conversations about the environmental cost of AI, including the large amounts of energy and water required to train and run AI systems. Students emphasize the disconnect between the CSU’s Climate Action Plan and its AI adoption. According to a recent student-led CSU campus survey, 56% of student respondents opposed the CSU AI integration, citing environmental harm as a top reason. Students question how AI implementation aligns with the CSU’s commitments to carbon neutrality and sustainable infrastructure, and they are demanding that the CSU publish a systemwide report detailing the carbon, energy, and water footprint of its AI operations. Students also express frustration that the CSU has yet to establish a clear framework for balancing innovation with sustainability, noting that the CSU’s investments in high-resource digital tools must not contradict its climate goals.

5. Mental Health, Ethics, and Accountability

Students are voicing serious concerns regarding the ethical and psychological dimensions of AI use among students. Across multiple discussions, students question how the CSU will protect students who disclose mental-health concerns to AI tools or who inadvertently receive inappropriate or unsafe responses. They ask whether there are effective mechanisms for reporting harmful content and how the CSU can respond to potential risks without violating user privacy, and they highlight the urgent need for oversight mechanisms that prioritize user well-being. Students also warn of potential liability gaps, particularly around the misuse of AI for non-consensual or harmful content, and urge the CSU to clearly outline responsibilities and protections for both students and faculty while ensuring the privacy and confidentiality of the user experience. A pronounced gap exists between students and faculty in how the validity and reliability of AI-generated responses are understood. While 80.2% of students generally disagree that they are comfortable submitting an AI-generated answer as their own work, 72.1% of faculty generally agree that they are comfortable doing so, revealing fundamentally different assumptions about accuracy and acceptable reliance on AI outputs. These data points intersect with students’ growing fear that overreliance on AI could erode critical thinking. Students stress that AI should supplement human insight, not replace it, and that the CSU must commit to integrating ethical literacy and critical-thinking education alongside any AI rollout.

6. Governance, Oversight, and Student Inclusion

Student leaders across the CSU system are unified in calling for greater inclusion, oversight, and shared governance in the CSU AI Initiative. An overwhelming number of students believe that student voices have been largely absent from decision-making. Despite the establishment of systemwide committees such as the Generative AI Advisory Committee and the AI Workforce Acceleration Board, students remain concerned that these bodies operate without consistent student participation or transparency, and they urge that such bodies not act as the sole decision-making authority over the implementation of the initiative. Accordingly, CSSA’s involvement in all future CSU AI decisions must occur and must be consistent, with clearly defined roles in consultation, review, and approval. Any systemwide AI initiative advanced without formal CSSA engagement undermines shared governance and repeats the exclusion students have already identified in the original rollout. Students emphasize that campus administrators often lack awareness of student sentiment. CSSA has gathered that a number of student leaders across the CSU are already conducting their own surveys to gauge how students feel about the resource. Students are asking the CSU to adopt a structured feedback mechanism, publicize campus-level data on AI usage and outcomes, and regularly engage CSSA in all decision-making throughout the implementation of ChatGPT Edu. Without such accountability, the CSU risks creating a system where innovation moves faster than inclusion.

The CSU’s AI Initiative has the potential to meaningfully transform higher education, but only if it is implemented with transparency, ethical accountability, environmental responsibility, and sustained student partnership. CSSA recognizes the importance of the CSU system, the nation’s largest four-year public university system, embracing artificial intelligence to support students’ futures and career pathways. However, any integration of new technologies like AI into the learning environment must be carried out in an ethical and transparent manner. As the governing body representing the nearly 470,000 CSU students, the Cal State Student Association affirms that future implementations must be accompanied by shared governance and responsible oversight to ensure student privacy, equity, and institutional trust.

Sincerely,

Katie Karroum
Vice President of Systemwide Affairs