Briefing
This paper asks how generative artificial intelligence (GenAI) is currently being used by engineering students in China and what students perceive as its effects on learning experience, learning behaviors, and academic performance—along with the main challenges and expectations for future integration into engineering education. The question matters because GenAI tools (e.g., ChatGPT and Chinese-language equivalents) are rapidly entering higher education, potentially reshaping how students search for information, complete assignments, and develop skills such as independent thinking and creativity. Engineering education is especially sensitive to these changes because it depends on correctness, domain knowledge, and the ability to reason through complex technical problems; inaccurate outputs could mislead students and propagate errors into downstream work.
The study uses an anonymized questionnaire survey design. The sample consists of 148 engineering students recruited using a combination of purposeful sampling (to include multiple institution types and engineering disciplines) and convenience sampling (to access an available student population). Participants include 72 undergraduates and 76 postgraduates, with 94 male (63.51%) and 54 female (36.49%). Geographically, respondents are distributed across North/Northeast China (26.4%), Central China (39.1%), Southwest/Northwest China (14.8%), and South/Southeast China (19.7%). Engineering disciplines include computer-related fields (32.4%), civil engineering (17.6%), mechanical engineering (16.9%), electrical engineering (6.8%), and other disciplines (26.3%). Data were collected between October 9, 2024 and November 1, 2024 through both offline and online participation, after ethics approval from Xi’an Jiaotong-Liverpool University (approved September 21, 2024) and informed consent.
Instrument-wise, the questionnaire contains 21 scale-based items grouped into five variables: (1) frequency of GenAI use across scenarios, (2) preference for using GenAI for complex problem-solving, (3) impact on individual learning capabilities, (4) challenges and concerns, and (5) perspectives on future applications. It also includes 7 multiple-choice questions and 1 open-ended question. For psychometric quality, the authors report strong internal consistency and factor-analysis suitability: Cronbach’s α exceeds 0.8 for all five variables, with an overall α of 0.879. For validity, they report a Kaiser-Meyer-Olkin (KMO) value of 0.867 and Bartlett’s test of sphericity with χ² = 1283.052, df = 210, and p < 0.001, supporting the appropriateness of factor analysis.
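To make the reliability figure concrete, here is a minimal sketch of how Cronbach’s α is computed for one scale-based variable. This is not the paper’s code, and the response matrix below is synthetic, invented purely for illustration; the formula is the standard one, α = k/(k−1) · (1 − Σvarᵢ/var_total).

```python
import numpy as np

def cronbach_alpha(responses: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) response matrix."""
    k = responses.shape[1]                         # number of items in the scale
    item_vars = responses.var(axis=0, ddof=1)      # sample variance of each item
    total_var = responses.sum(axis=1).var(ddof=1)  # variance of respondents' summed scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Synthetic 5-point Likert responses: 6 respondents x 4 items,
# deliberately correlated so the items "hang together".
data = np.array([
    [4, 5, 4, 4],
    [3, 3, 3, 4],
    [5, 5, 4, 5],
    [2, 2, 3, 2],
    [4, 4, 5, 4],
    [1, 2, 1, 2],
], dtype=float)

alpha = cronbach_alpha(data)
print(f"alpha = {alpha:.3f}")  # -> alpha = 0.956
```

An α above 0.8, as reported for each of the five variables here, indicates that the items within a variable measure a common underlying construct consistently.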
Key findings are primarily descriptive and based on students’ self-reported perceptions.
First, GenAI adoption is widespread. Among tools, ChatGPT is the most used (77.03% of respondents). Wenxin Yiyan (Baidu) is used by 41.89%, while DeepL (25.68%), Microsoft Bing (20.27%), and Google Bard (18.92%) also have notable usage. Less-used tools (e.g., DALL·E, Canva AI, Adobe Firefly) are each around or below 10%. In terms of frequency, 20.95% report daily use and 41.89% report using GenAI multiple times per week; only 4.73% report never using it. The authors note subgroup differences: postgraduates report higher regular use (40%) than undergraduates (25.64%), and computer-related students use GenAI more frequently than students in other engineering disciplines.
Second, students report using GenAI for a variety of academic tasks. The most common scenario is “finding learning resources or concept explanations” (55.41%). Other frequent uses include “compiling reports or documents” (54.73%) and “data analysis” (52.03%). For assignment-related work, 32.43% use GenAI frequently and 41.22% occasionally. For basic engineering-related academic tasks, 50.68% use GenAI for literature search and review, and 47.97% use it to generate initial ideas for design assignments or research projects.
Third, students perceive positive effects on learning efficiency and engagement, but mixed effects on academic performance and more nuanced effects on thinking skills. For learning efficiency, 36.49% report “significant improvement” and 52.03% report “improved,” totaling 88.52% reporting improved efficiency; only 10.81% report “almost no change,” and 0.68% report reduced efficiency.
For learning initiative (active learning), 64.19% report improved initiative (41.22% “improved” and 22.97% “significantly improved”). However, 6.76% report a decline in initiative, which the authors interpret as a potential over-reliance effect that could undermine autonomy and intrinsic motivation.
For independent thinking, the distribution is more mixed: 47.97% report improvement (23.65% “improved” and 24.32% “significantly improved”), 34.46% report “almost no change,” and 17.56% report weakened (14.86%) or significantly weakened (2.70%) independent thinking. Creativity shows a similar pattern: 58.78% report positive effects (35.81% “improved” and 22.97% “significantly improved”), 29.73% report almost no change, and 11.48% report negative effects (8.78% reduction and 2.70% significant reduction). The authors suggest that creativity gains may depend on whether GenAI is used for idea generation versus merely executing formulaic solutions.
For academic performance, students’ perceptions diverge from their efficiency and engagement reports. Nearly half of respondents do not feel GenAI improved their academic performance. Still, some report benefits: 12.2% report “improved” and 36.5% report “slightly improved” academic outcomes (as stated in the discussion/conclusion). The authors attribute the discrepancy to concerns about GenAI’s accuracy and domain-specific reliability, and to the possibility that faster task completion does not necessarily reflect deeper understanding.
Fourth, students identify concrete challenges. The most prominent is inaccuracy of generated content, reported by 62.16% as a key challenge. When asked about the frequency of encountering inaccurate outputs, 43.24% report often or very often, and 37% report occasionally. Over-reliance is the second major concern (39.86%). Usability difficulties are reported by 20.27% (e.g., interfaces difficult for beginners and insufficient technical support, especially for newer tools). Ethical concerns and privacy issues are also present: 14.19% mention ethical/privacy concerns as barriers, and 17.57% cite high costs. On ethics specifically, 35.14% rate ethical issues as “important” and 22.97% as “very important,” while 40.54% rate ethics as of “average importance” and only 1.36% dismiss it. On data privacy satisfaction, 54.05% report “average satisfaction,” 25% are “satisfied,” 12.84% “very satisfied,” 5.41% “dissatisfied,” and 2.70% “very dissatisfied.”
Fifth, expectations for integration are strongly supportive but cautious. Regarding integration level, 20.95% support full integration into teaching, 43.92% prefer partial integration for certain courses/scenarios, and 30.41% view it as helpful but not necessary. For training, 43.24% and 49.32% expect institutions to provide basic introductory AI courses and in-depth practical training, respectively, with nearly half preferring online training. For regulation, 55.41% want clear usage guidelines and 47.3% want tailored policies by curriculum; 46.62% want training for both students and faculty. A small minority (10.14%) supports a complete ban.
Finally, students express optimism about GenAI’s future in education, with 29.05% describing prospects as “broad” and 39.86% as “very broad,” while 27.7% are neutral and only 3.38% negative (“narrow” or “very narrow”). They also specify desired improvements: over 60% emphasize higher accuracy for discipline-specific problems, better literature search, and integration with professional software; 54.73% want deeper insights into professional questions; 40.54% call for improved data-processing abilities.
Limitations are acknowledged by the authors and are also apparent from the methodology. The study relies on self-reported data, which can introduce social desirability bias and miscalibration between perceived and actual learning gains (“perception doesn’t always align with actual outcomes”). The design is cross-sectional and does not examine long-term effects of GenAI adoption on learning trajectories, skill development, or career readiness. The sampling strategy, while designed to include diversity, uses convenience sampling and may not represent all engineering students, particularly those in resource-limited institutions or with slower technology adoption. The authors also note that discipline, region, and education level likely influence usage patterns, and that future work should use stratified random sampling, longitudinal designs, and objective data sources (test scores, homework, teacher/peer evaluations, and platform log data).
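The stratified random sampling the authors recommend for future work can be sketched briefly. This is an illustration under stated assumptions, not the paper’s procedure: the roster, discipline labels, and counts below are hypothetical, and strata are allocated proportionally to their share of the population.

```python
import random
from collections import defaultdict

def stratified_sample(population, stratum_of, n_total, seed=0):
    """Draw ~n_total units, allocated proportionally across strata."""
    rng = random.Random(seed)
    strata = defaultdict(list)
    for unit in population:                     # group units by stratum label
        strata[stratum_of(unit)].append(unit)
    sample = []
    for members in strata.values():
        # proportional allocation: stratum share of the population
        n_stratum = round(n_total * len(members) / len(population))
        sample.extend(rng.sample(members, min(n_stratum, len(members))))
    return sample

# Hypothetical roster of 1000 students: (student_id, discipline) pairs.
roster = [(i, d) for i, d in enumerate(
    ["computer"] * 320 + ["civil"] * 180 + ["mechanical"] * 170 +
    ["electrical"] * 70 + ["other"] * 260)]

sample = stratified_sample(roster, lambda s: s[1], n_total=100)
print(len(sample))  # -> 100, split 32/18/17/7/26 across disciplines
```

Unlike the convenience sampling used here, this guarantees each discipline (or region, or education level) is represented in proportion to its size, which would let subgroup comparisons generalize more safely.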
Practically, the results suggest that engineering educators and institutions should treat GenAI as a learning aid rather than a replacement for learning processes. Students themselves favor guidelines and training over prohibition, but they strongly emphasize accuracy, domain reliability, ethics, and privacy. Therefore, institutions should develop discipline-aware policies, provide instruction on responsible use and verification practices, and design assignments that reward critical evaluation and conceptual understanding rather than mere output generation. The findings are most relevant to engineering faculty, curriculum designers, university administrators responsible for academic integrity and student support, and tool developers who need to improve reliability and usability for engineering-specific tasks.
Cornell Notes
Using a questionnaire of 148 Chinese engineering students, the study documents how often and for what purposes students use generative AI (especially ChatGPT), and how they perceive its effects on efficiency, initiative, independent thinking, creativity, and academic performance. Students report strong gains in efficiency and engagement but mixed effects on independent thinking and creativity, and many report no improvement in academic performance—primarily due to concerns about accuracy, over-reliance, and ethics/privacy.
What research questions does the paper address?
How frequently and in what ways engineering students in China use generative AI; how it affects their learning experience; and what challenges and expectations they have for integrating it into engineering education.
What study design and data collection method were used?
An anonymized questionnaire survey administered between October 9 and November 1, 2024, using both offline and online participation.
Who participated and how large was the sample?
148 engineering students: 72 undergraduates and 76 postgraduates, from multiple regions and engineering disciplines in China.
How did the authors assess questionnaire reliability and validity?
They report an overall Cronbach’s α = 0.879 (with α > 0.8 for each variable), KMO = 0.867, and Bartlett’s test with p < 0.001 to support factor-analysis suitability.
Which GenAI tools were most popular among students?
ChatGPT (77.03%) was most used, followed by Wenxin Yiyan (41.89%), DeepL (25.68%), Microsoft Bing (20.27%), and Google Bard (18.92%).
How often do students use generative AI?
20.95% use it daily and 41.89% use it multiple times per week; only 4.73% report never using it.
What are the most common academic use scenarios?
Finding learning resources/concept explanations (55.41%), compiling reports/documents (54.73%), and data analysis (52.03%).
What did students report about learning efficiency and initiative?
Learning efficiency improved for 88.52% (36.49% significant improvement; 52.03% improved). Learning initiative improved for 64.19% (41.22% improved; 22.97% significantly improved).
How did GenAI affect independent thinking, creativity, and academic performance?
Independent thinking: 47.97% improved, 34.46% almost no change, and 17.56% weakened (14.86% weakened; 2.70% significantly weakened). Creativity: 58.78% positive, 29.73% almost no change, and 11.48% negative. Academic performance: nearly half reported no improvement; only 12.2% reported improved and 36.5% slightly improved outcomes.
What challenges and expectations did students highlight for integration?
Top challenges were inaccuracy (62.16%) and over-reliance (39.86%), plus usability (20.27%). For integration, 20.95% supported full integration and 43.92% partial; 55.41% wanted clear usage guidelines and 46.62% wanted training for both students and faculty.
Review Questions
How do the reported effects on learning efficiency and initiative differ from the reported effects on academic performance, and what explanations does the paper offer?
Which student-perceived risks (accuracy, over-reliance, ethics/privacy, usability) appear to be most influential in shaping attitudes toward GenAI?
What does the paper’s psychometric reporting (Cronbach’s α, KMO, Bartlett’s test) imply about the survey instrument’s measurement quality?
If you were designing an engineering course using GenAI, which survey results would directly inform your assignment design and assessment strategy?
Key Points
1. Survey evidence from 148 Chinese engineering students shows widespread GenAI use: 20.95% daily and 41.89% multiple times per week.
2. Students most often use GenAI for learning resources/concept explanations (55.41%), report/document compilation (54.73%), and data analysis (52.03%).
3. Perceived learning efficiency is strongly positive (88.52% improved), and learning initiative is also largely positive (64.19% improved).
4. Effects on higher-order thinking are mixed: independent thinking improved for 47.97% but weakened for 17.56%; creativity improved for 58.78% but decreased for 11.48%.
5. Nearly half of students report no improvement in academic performance, suggesting that efficiency gains do not automatically translate into better grades or understanding.
6. The dominant barriers are GenAI inaccuracy (62.16%) and over-reliance (39.86%), alongside usability issues (20.27%) and ethics/privacy concerns.
7. Students favor structured adoption: 55.41% want clear university usage guidelines and 46.62% want training for both students and faculty; most prefer partial integration over full replacement.