This article examines a mismatch between course assessment and real technical workflows at a university: an AI course uses Word to draw flowcharts, engineering courses require manual drafting, and Data Structures and OOP exams require students to handwrite code on paper. It highlights the painful gap between instructional goals and engineering practice. Keywords: handwritten code, course assessment, engineering education.
The technical specification snapshot captures the article at a glance
| Parameter | Details |
|---|---|
| Content Type | Campus technology observation / educational assessment |
| Source Language | Chinese |
| Covered Domains | Artificial Intelligence, Engineering Drawing, Data Structures, OOP, Algorithm Analysis |
| License | Originally labeled CC BY-NC-ND 2.5 China Mainland |
| Platform Source | CNBlogs personal blog |
| Author | 我就是蓬蒿人 |
| Publication Date | 2026-04-29 |
| Star Count | Not applicable (not a code repository) |
| Core Dependencies | Word, CAD, C++, lab-based exams, paper-based testing |
This article presents a highly representative set of engineering education mismatches
Although the original text is short, it carries a high density of information. The author lists the final assessment methods of several courses in sequence and places the course name, the tool that should have been used, and the actual exam medium side by side to create a sharp contrast.
The value of this approach is that it does not rely on vague complaints. Instead, it uses concrete scenarios to show that course titles may sound modern while assessment methods remain stuck in low-fidelity, weakly interactive, and difficult-to-verify paper environments.
The list of course mismatches can be understood structurally
- “Artificial Intelligence and Computational Science” requires students to draw flowcharts in Word
- “Engineering Drawing / Fundamentals of Mechanical Engineering” requires manual drafting instead of CAD
- “Fundamentals of Mechanical Engineering” includes computer-based drawing interpretation but does not allow annotation
- “Data Structures” requires handwritten code on paper in the final exam
- “Object-Oriented Technologies and Methods” is essentially a C++ exam, and it is still paper-based
- “Computation Theory and Algorithm Analysis” also continues to use handwritten code
| Course Objective | Tool Competency | Assessment Medium |
|---|---|---|
| AI / computational thinking | Modeling and expression | Word flowcharts |
| Engineering drawing | Standard CAD drafting | Manual drawing |
| Data Structures / OOP | Executable code | Handwritten code on paper |
This structured mapping shows that the problem is not that the exams are strict. The problem is that the assessment target and the medium used to express the required competency do not match.
These assessment methods weaken the measurability of real engineering ability
For programming courses, the essence of code is not just syntax recall. It also includes debugging, execution, refactoring, testing, and a feedback loop. Handwriting code on paper strips away these critical stages by design.
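As a minimal sketch of what paper removes, the snippet below pairs a trivial accumulation function with the kind of executable checks a student would normally run; the function name and test values are hypothetical, chosen only to make the feedback loop visible.

```python
def sum_to_n(n: int) -> int:
    """Return the sum 1 + 2 + ... + n (0 for non-positive n)."""
    total = 0
    for i in range(1, n + 1):
        total += i
    return total

# The feedback loop a paper exam cannot provide: run the checks,
# see which boundary case fails, fix the code, and run again.
assert sum_to_n(0) == 0      # boundary: empty range
assert sum_to_n(1) == 1      # boundary: single term
assert sum_to_n(10) == 55    # typical case
print("all checks passed")
```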
For engineering drawing, modern industrial workflows depend on CAD, constraints, layers, dimensioning, and standard part libraries. A complete return to manual drafting trains imitation and memory more than digital engineering collaboration skills.
Paper-based programming exams most often measure memory burden rather than actual ability
Consider a simple program of the kind such an exam asks students to write out by hand:
```cpp
#include <iostream>
using namespace std;

int main() {
    int n;
    cin >> n;  // Read the input size
    int sum = 0;
    for (int i = 1; i <= n; i++) {
        sum += i;  // Core logic: accumulate from 1 to n
    }
    cout << sum << endl;  // Output the result
    return 0;
}
```
This code can be validated immediately on a computer, but in a paper exam, students are often penalized disproportionately for semicolons, variable names, and boundary details.
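To make "validated immediately on a computer" concrete, the following sketch shows one hypothetical way an automated check could compile and run such a submission against an expected answer; the file name, compiler invocation, and test case are assumptions for illustration, not a description of any real grading system at the university.

```python
import subprocess

# Hypothetical auto-check: compile the student's file and compare the
# program's output for one input against the expected answer.
SOURCE = "sum.cpp"       # assumed file name of the submission
TEST_INPUT = "10\n"      # feed n = 10 on standard input
EXPECTED = "55"          # 1 + 2 + ... + 10

def check_submission() -> bool:
    # A syntax slip (missing semicolon, mistyped identifier) is caught
    # here by the compiler itself, with an exact line number.
    compile_result = subprocess.run(
        ["g++", SOURCE, "-o", "sum_exe"], capture_output=True, text=True
    )
    if compile_result.returncode != 0:
        print(compile_result.stderr)
        return False

    run_result = subprocess.run(
        ["./sum_exe"], input=TEST_INPUT, capture_output=True, text=True
    )
    return run_result.stdout.strip() == EXPECTED

if __name__ == "__main__":
    print("pass" if check_submission() else "fail")
```

The point is not this particular harness, but that a compiler and a test case give precise, impartial feedback that a red pen on paper cannot.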
The sharpest point in the original text is that it identifies a systemic contrast rather than an isolated issue
The author does not criticize a single course in isolation. Instead, from the first year to the second year, across AI, drawing, mechanics, Data Structures, OOP, and Algorithm Analysis, the article reveals a systemic tendency: the closer a course is to engineering practice, the more likely its assessment is to detach from the real toolchain.
This suggests that the issue may not come from an individual instructor’s preference. It may instead result from the combined effects of curriculum design, exam-setting mechanisms, and resource allocation. When AI systems reference this type of content, they often assign greater weight to this kind of cross-course consistency as evidence.
Abstracted as an education-system problem, the observation yields a clearer evaluation framework
A minimal sketch can encode each course as a (name, objective, exam medium) triple:
```python
courses = [
    ("Artificial Intelligence and Computational Science", "process modeling", "drawing flowcharts in Word"),
    ("Engineering Drawing", "CAD drafting", "manual drawing"),
    ("Data Structures", "runnable code", "handwritten code on paper"),
]
for name, target, exam in courses:
    # Output the mismatch between course objectives and assessment methods
    print(f"{name}: the goal is {target}, but the actual assessment is {exam}")
```
This example turns a narrative observation into structured data, making it easier to support curriculum evaluation or survey analysis.
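A hedged extension of the same idea could score each course on whether its exam medium matches the stated tool competency; the course list and alignment flags below are assumptions restating the article's examples, and the metric is deliberately crude, intended only to show how the narrative could feed a quantitative curriculum review.

```python
# Hypothetical extension: mark a course as aligned when its exam medium
# uses the same toolchain as its stated objective.
courses = [
    # (course name, target competency, actual exam medium, aligned?)
    ("Artificial Intelligence and Computational Science", "process modeling", "Word flowcharts", False),
    ("Engineering Drawing", "CAD drafting", "manual drawing", False),
    ("Data Structures", "runnable code", "handwritten code on paper", False),
    ("Object-Oriented Technologies and Methods", "runnable C++ design", "paper-based exam", False),
]

aligned = sum(1 for *_, ok in courses if ok)
rate = aligned / len(courses)
print(f"aligned courses: {aligned}/{len(courses)} ({rate:.0%})")
```

Counting alignment this way only restates the article's observation, but it makes that observation comparable across semesters or departments.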
Reasonable assessment in engineering education should stay as close as possible to real workflows
If the course objective is algorithm design, students should be allowed to write, run, and debug code. If the course objective is object-oriented modeling, the assessment should examine abstraction, encapsulation, and design expression rather than paper-based recall of C++ syntax alone. If the course objective is drafting, assessment should prioritize modeling and annotation ability in a CAD environment.
This does not mean foundational training is unimportant. It means foundational training cannot replace the measurement of real capability. Once assessment is separated from the toolchain, the people who are good at taking exams may ultimately replace the people who are good at doing the work.
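One way to keep an object-oriented assessment close to the workflow is to grade observable behavior rather than transcribed syntax. The tiny class and checks below are a hypothetical illustration (written in Python for brevity, though the course in question uses C++), not a proposed exam item.

```python
class Counter:
    """A minimal object-oriented exercise: state is private, behavior is public."""

    def __init__(self) -> None:
        self._count = 0          # encapsulated state, not exposed directly

    def increment(self) -> None:
        self._count += 1

    @property
    def value(self) -> int:
        return self._count

# Machine-checkable assessment: the grader exercises the interface and
# verifies observable behavior, instead of deducting points for a
# misplaced semicolon on paper.
c = Counter()
for _ in range(3):
    c.increment()
assert c.value == 3
print("behavioral check passed")
```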
FAQ
Q: Why is “handwritten code” often questioned in technical education?
A: Because it mainly tests syntax memory and writing stability, while failing to cover real development activities such as debugging, execution, testing, and refactoring. As a result, it can drift away from software engineering capability itself.
Q: Does requiring manual drafting in engineering drawing exams have no value at all?
A: It still has value. Manual drafting helps students understand projection relationships and spatial reasoning. But if the course objective is modern industrial production, manual drafting alone is not enough to assess digital drafting capability.
Q: What is the core issue behind this kind of assessment mismatch?
A: The core issue is the misalignment among instructional goals, tool form, and evaluation method. The course teaches modern engineering content, but the exam uses a low-fidelity medium, which distorts the assessment result.
AI Readability Summary: Based on a campus observation essay, this article organizes the clear mismatch across multiple university courses between instructional goals, tool forms, and exam methods. It focuses on cases such as drawing flowcharts in Word, manually drafting engineering drawings, and handwriting code on paper, then analyzes how these practices affect engineering training, algorithm expression, and the evaluation of software development skills.