Hacker News

The things I don’t like about putting too much weight in the exams are:

* It’s sort of unnecessarily high stakes for the students: a couple of hours determine your grade for many hours of studying.

* It’s pretty artificial in general; in “real life” you have the ability to go around online and look for sources. This puts a pretty low ceiling on the level of complexity you can actually throw at them.



Exams happen all the time in real life. Or rather, situations where you can't just look up fundamental knowledge do: job interviews, presentations, even mundane work tasks. All of these require you to know the basics quickly. "The basics" are relative, of course, but I often point out to my students: "you don't care if your doctor needs to look up the specific interactions of your various meds. You do care if you see them googling 'what is an appendix'."

Proctored, in-person exams are the only reliable mechanism we have for ascertaining whether a specific individual has mastered key fundamentals and can answer relevant questions about them in a relatively timely fashion. Everything else is details and thresholds: how fast do you need to be able to recall, how deep, which details are fundamental?

From there, I think it's fine to hate poorly made exams, and it's a given that many folks making exams have no idea what they're doing (or don't have the resources to do it right). But the premise of an exam is not completely divorced from reality.


I think many of us would agree that job interviews (in tech at least) are horribly broken, because they don't do a good job of testing candidates' ability to do the actual work they'll be doing day-to-day. So saying exams are like job interviews is not a positive for exams. And, for most people, the ideal is to find a job and stick with it for years, so it's not like job interviews are common, everyday occurrences.

For presentations, usually you spend a lot of time preparing for them (similar to exams), building a slide deck or pages of notes that you refer to while giving the talk (not similar to exams). Sure, you do have to be able to think on your feet, but I don't think the comparison to a sit-down exam is all that apt.

For mundane work tasks, you have the internet and whatever reference materials you want (including LLMs, these days); this sort of thing is so different from a sit-down exam that it's almost comical that you'd try to equate the two.

I'm not saying I know of a better way to evaluate learning than proctored, in-person exams, but suggesting that sort of situation is particularly relevant to real life... no, no way.


Having been both a data analyst and a software engineer, I agree. The data analyst interview? Here are 50K rows of Excel data with all kinds of weirdness in them; you're a data analyst, right? You have 4 hours to analyze this data. Go!

The software engineer one: here's a take-home assignment. One week later: finished!

To be fair, they both represented pretty well the work I'd actually be doing. The data analyst one didn't show how much I'd also be doing data engineering, but whatever; I was a SWE before my DA stint. Back to SWE again now, though.
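For what it's worth, that kind of timed "here's a pile of messy rows" exercise usually starts with a quick triage pass. A minimal sketch in plain Python (the column names "region" and "sales" and the cleaning rules are hypothetical; in practice this would more likely be pandas):

```python
# First-pass triage of messy tabular data under time pressure:
# strip whitespace, coerce numerics, and drop rows that can't be salvaged.
# Column names ("region", "sales") are hypothetical stand-ins.

def clean_rows(rows):
    cleaned = []
    for row in rows:
        region = (row.get("region") or "").strip().title()
        raw = str(row.get("sales", "")).replace(",", "").strip()
        try:
            sales = float(raw)
        except ValueError:
            continue  # unsalvageable numeric; in real work you'd count these
        if region:
            cleaned.append({"region": region, "sales": sales})
    return cleaned

messy = [
    {"region": "  north ", "sales": "1,200.50"},
    {"region": "south", "sales": "n/a"},  # dropped: bad number
    {"region": "", "sales": "300"},       # dropped: no region
]
print(clean_rows(messy))  # -> [{'region': 'North', 'sales': 1200.5}]
```

The point of the exercise is less the code than the judgment calls: what to coerce, what to drop, and what to flag in the four hours you have.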


>you don't care if your doctor needs to look up the specific interactions of your various meds. You do care if you see them googling 'what is an appendix'.

Sounds like you're saying that it's acceptable to be a little foggy about the limits of your knowledge, as long as you remember the core foundations. For a first-year medicine student, the edge of their knowledge will include things that are the core foundation of a practicing doctor. Why should such a student be tested as if they already had several years of familiarity with the subject, when this is all relatively new material to them?


I don’t completely disagree with this and I do appreciate the well thought out response. I should say that most of my recent relevant experience was as a grad student teaching assistant so I’m neither just somebody grouching that I didn’t like taking exams, nor somebody with deep experience setting the things up. Some respectful quibbles, then, since I gather you have more direct experience than me.

> Proctored, in-person exams are the only reliable mechanism we have for ascertaining if a specific individual has mastered key fundamentals and can answer relevant questions about them in a relatively timely fashion. Everything else is details and thresholds - how fast do you need to be able to recall, how deep, what details are fundamental.

I don’t think this is how people actually engage with exams. I had a lot of folks in office hours who treated the exam as the ceiling of their competence, rather than the floor, and did things like cramming, or trying to figure out exactly what topics would be on the test so they could study just those. If the goal is to establish a 100% solid foundation for things you have to know to be a professional (which I think is a great goal), I prefer something like Mastery Learning to the conventional exam process. (Maybe we could call Mastery Learning conventional exams with a different, unusual set of thresholds, if we want to look at it that way.)

> From there, I think it's fine to hate poorly made exams, and it's a given that many folks making exams have no idea what they're doing (or don't have the resources to do it right). But the premise of an exam is not completely divorced from reality.

I worked with some professors who I thought gave good exams, and some who gave less good ones, so I don’t think the premise is completely divorced from reality. But the format seems more like something the good instructors overcame than a construct that actually helped them.


I think it's all about speed. In "real life" everything can be looked up, but an exam optimizes for not even having to look it up. Then any research becomes much faster.

Whether that's good or bad, I don't know; I think US higher education focuses too much on the ability to produce huge amounts of mediocre work. But that's the idea behind exams.


One of the reasons I've always encouraged software people to learn to touch type has nothing to do with typing speed: it's about reducing or eliminating the cognitive load of typing. You want to be thinking in expressions (sentences), not letters. (The increase in effectiveness comes from not getting distracted by the mechanics of typing.)


In real life you need to know the options and their trade-offs to solve a given problem. You don't need to know all the techniques perfectly, but you do need to be able to characterize them and compare them, from rote memory.


I agree. I think many people who rail against exams underestimate how important memory is to more complicated skills. How can you debug a complex application if you have to keep looking up every operator and keyword in the language you're using? It'd be like trying to interpret poetry in a foreign language while having to look up every single noun. I'm not saying people can't do it, but it's tedious, slow, and you probably wouldn't think of them as a "professional worth paying for their service". Some amount of memorization is key.


It still doesn't feel to me that those things are similar. A sit-down exam is a time-limited, high-pressure situation where you're expected to demonstrate proficiency in the things you've learned over the past several months. Sure, much of that learning builds on stuff you've learned previously, but the focus is on the prior semester (or half-semester, for mid-terms).

When I sit down to debug a complex application, I'm drawing on my prior 25+ years of experience. While I certainly would rather fix the problem faster rather than slower, I don't have a time limit, and taking my time (or even leaving the problem alone for hours or days) can be more effective than trying to work quickly and get everything done immediately.

The last time I sat for an exam was in 2003, and I honestly have not experienced anything in life since then that feels like that. Even job interviews have not felt similar enough to me to evoke that same feeling. (Frankly, I've enjoyed most job interviews; I don't think I've ever enjoyed an exam.) That's just my experience, of course, but I don't feel like an outlier.


> It’s pretty artificial in general; in “real life” you have the ability to go around online and look for sources.

Sort of. In real life, you are expected to have immediate knowledge of your field and (in some environments) be able to perform under pressure. I'm not going to pretend the curriculum is a perfect match for what people should know, but it does provide a common baseline to be able to have a common point of reference when communicating with colleagues. I would suggest the most artificial thing about exams is the format.

> It’s sort of unnecessarily high stakes for the students; a couple hours to determine your grade for many hours of studying.

I don't like dismissing the ordeal of people who face test anxiety, but tests are not really high stakes. There is a potential that a person will have to repeat a course if it is a requirement for their degree. At least at the institutions I attended, the grade distribution across exams and assignments, combined with a late drop date, meant that failing a course was only possible if you chose it to be. A student may be forced to face some realities about their dedication, priorities, work habits, time management, interests, abilities, etc. It may force a student to make some hard decisions about where they want their life to lead, but it does not bar them from success in life. And those are the worst-case scenarios. A more typical scenario is that you end up with a lower GPA.


>There is a potential that a person will have to repeat a course if it is a requirement for their degree.

How is that not high stakes? The result of several months worth of effort hinges on what they do during a 2-3-hour window. If you had to build something and the last step involved a procedure that could potentially tear down everything you made, you would try to find a way to redesign it so that didn't happen, or to limit the scope of the damage.


If you want an easy, stress-free job, sure, nothing is high stakes. You will never be invited to meetings with executives, and you can google everything all the time.

If you, on the other hand, attend meetings, you will need both deep knowledge (which requires many hours of studying) and fast thinking, and the questions will make you realize you know little about many things. You need general knowledge of many things, not just whatever you are building at the moment. If you are successful in these meetings, your salary can and will grow.

Also, some meetings will make you realize that college and/or university are life in easy mode. Very few things you consider hard in college will be work-level hard.


This is where the alternative of a coursework-based option (with graded activities still monitored) comes in. The downside is that it tends to force in-person, synchronous sessions rather than custom scheduling of regular tests.

The point is more about whether the graded work is actively reviewed than about which individual choice is ideal, though. Electronic or written, remote or in person, weighted toward exams or continuous assessment: these are all debates orthogonal to the problem of cheating and falsely claiming work.

I had attended a few courses over a decade ago and just completed a degree recently. The methods of cheating have changed, but not because of pencils vs keyboards.


High-stakes artificial exams can help prepare you for the artificial stakes of job interviews, where you need to crank out a working solution in 30 minutes with jet lag and someone looking over your shoulder.


That's true. They do better prepare an applicant for a job that filters on a person's ability to accomplish arbitrary things in a vacuum, completely disconnected from the real world.

That's probably a good thing to filter on for, say, the navigation role on all kinds of crafts (from land to sea to space). There are naval roles where navigating with a sextant and memory is an important skill to have, and to test for.

But that operating-in-a-vacuum skill doesn't relate well to roles that don't need to exist in a vacuum. In most of the jobs in the real world, we get to use tools -- and when the tools go out to lunch, we don't revert to the Old Ways.

When an accountant's computer dies, they don't transition back to written arithmetic and paper ledgers. Instead, someone who fixes computers gets it going again, and they get back to work as soon as that's done.


Obviously they're both supposed to be proxy measures, not realistic scenarios. I was mostly joking before, but I do think exams provide a pretty good proxy for ability in a subject if the teacher is decent. Interviews, not so much, unless the applicant is similarly prepared: foreknowledge of what they'll be tested on, time to prepare, and some recent practice.


> It’s pretty artificial in general; in “real life” you have the ability to go around online and look for sources.

I dunno how you work, but I'd get raised eyebrows from people watching me hit Google for any question required of my role.

I mean, we're not talking about using calculators here, and we're not talking about vocational training (How do I do $FOO, in docker? In K8s? How do I write a GH runner? Basically any question that involves some million-dollar company's product).

We're talking about college stuff: you absolutely should not be allowed to look up linked lists for the first time during an exam, copy the implementation from Wikipedia, port it to your language, and move on.
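To make the linked-list point concrete, this is roughly the kind of fundamental an exam can reasonably expect from memory; a minimal sketch in Python:

```python
# A minimal singly linked list -- the sort of fundamental an exam
# expects you to produce from memory rather than copy from Wikipedia.

class Node:
    def __init__(self, value, next=None):
        self.value = value
        self.next = next

class LinkedList:
    def __init__(self):
        self.head = None

    def push_front(self, value):
        # O(1): the new node points at the old head
        self.head = Node(value, self.head)

    def to_list(self):
        # Walk the chain once, collecting values in order
        out, node = [], self.head
        while node:
            out.append(node.value)
            node = node.next
        return out

ll = LinkedList()
for v in (3, 2, 1):
    ll.push_front(v)
print(ll.to_list())  # -> [1, 2, 3]
```

Nothing here is exotic; that's exactly why it's a fair test of whether the fundamentals are internalized.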

In the real world, we want people who mostly know what to do. The real world is time-constrained: you could spend 2 hours learning to do what they assumed you already knew based on your diploma, but they'd be pissed to find out that you need to look up everything because that's how you coasted through college.

Exam situations are more like the real world than take-home assignments: High-stakes, high-pressure, timeboxed.

If your real world does not have high-stakes, high-pressure, timeboxed tasks, then you really haven't had much contact outside of your bubble.


To me, exams are just like democracy. The worst form of evaluation, except for all others that have been tried.



