I’ve found that when I do poorly on a test I tend to criticize testing in general, so here goes.
Anyone else notice a huge discrepancy between what the SAT, MCAT, ACT, or whatever [A-Z]*T test covers and what's expected of us when we enter college and the workforce? On those tests, you're given a calculator, a pencil, and your brain, and you're expected to compete against the rest of your classmates. Once you enter college, though, you're encouraged to collaborate with other students and to be resourceful and persistent in your research, and there's usually no multiple choice.
One of the nice things about having a programming background when taking those tests was that if a question came up that was a) tedious or b) somewhat hard, I could usually write a quick program on my trusty TI-83 to solve it. I don't think that's cheating – it was simply using the resources available to me. But there's no question that I had an advantage over the other students, because I was better able to use my existing tools. A comparable example: suppose you were expected to take the test with a pencil that could only write X words or numbers, while I was allowed to take the same test with a pencil that could write 100 times more.
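To illustrate the kind of quick program I mean – this is a hypothetical example, not one of my actual TI-83 programs, and it's sketched in Python rather than TI-BASIC – here's the classic one: a few lines that apply the quadratic formula, so a tedious "find the roots" question takes seconds instead of a page of scratch work.

```python
import math

def solve_quadratic(a, b, c):
    """Return the real roots of ax^2 + bx + c = 0, or None if there are none."""
    disc = b * b - 4 * a * c  # discriminant decides whether real roots exist
    if disc < 0:
        return None
    root = math.sqrt(disc)
    return ((-b + root) / (2 * a), (-b - root) / (2 * a))

# x^2 - 5x + 6 = 0 factors as (x - 2)(x - 3)
print(solve_quadratic(1, -5, 6))  # → (3.0, 2.0)
```

On the calculator this was a short TI-BASIC program behind the PRGM key; the point is the same either way – a resource you built once keeps paying off on every similar question.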
So in this way, those tests are unfair. But I also think they're a tad counterproductive. Students who study for them stay up nights memorizing facts that, once they get to college or enter the workforce, a quick Google search will answer. Why not test students on problems that are more relevant?
What I would propose is the following: a two-hour test in a format similar to the ones currently in place, followed by a one-hour group exam (or perhaps move the group exam to the front, to keep anyone from eyeing the door). The idea is to divide the entire testing room into groups of four or five, and each group would move to a separate area of the room to take the group test together. Each student is responsible for writing down their own answers, but the tests would be graded as a group, so the work could be divided evenly without repetition.
What does this accomplish? First, a sense of what a real job will be like – having access to other group members is like having co-workers and colleagues in the real world. Second, real problems can be assigned – problems that are actually challenging and provide a sense of accomplishment when completed. And finally, colleges will be able to see how students perform in groups, which would be incredibly useful knowledge for colleges trying to make admissions decisions about students.
And there's my little rant on standardized testing and education in general. I've said this before, and I'll say it again: education is the single most important issue facing our country today. Most of the problems in today's United States can be traced to education, or the lack thereof, and its effects on adult decision-making.
Saw an article on Slashdot about how Apple seems to be intentionally slowing down competitors’ software. I’m not really sure why this is a win for anyone – as Apple, don’t you want people to be able to use the software they want to use? If you want them to use Safari, make it not suck!
It's another issue in a long line of issues I have with the browser wars. To summarize: I have no idea why what you browse with should matter at all. And in fact, that's what groups such as the World Wide Web Consortium aim to fix – in their view, every page should look the same regardless of which browser you view it in.
Many of you who read this blog probably know that my browser of choice is Internet Explorer 7. "But Jimmy," you ask, confused, "you just said you wanted standards compliance, which is what IE famously avoids. What's up with that?" I use IE7 because – and this may sound cliché, but I don't really care – everyone else does. And because everyone else does, every web developer out there may curse under their breath each time they have to fix something for Microsoft, but they'll do it, because right now IE has over 50% market share. And to be fair, many of the standards now in place came about AFTER Internet Explorer first implemented its own versions (ActiveX is a good example).
In essence, IE has good reasons for not being as standards-compliant as it perhaps could be: a lot of Microsoft software relies on these proprietary technologies to work properly. The upside is that we get some cool technology; the downside is that it doesn't work perfectly in every browser. But because IE is free and Windows is installed on over 90% of the world's computers, I'm willing to overlook the occasional standards headache if I get technology such as Windows Update that wouldn't work in any other browser. And even though Microsoft really has no reason to change, no reason to get better, they continue to do so – IE8 is supposed to be even more standards-compliant.
The title of this post is "Apple", and yet I'm talking about IE7. So I'll switch back to Apple and ask, "Why?" Why, Apple, did you even create Safari? Why not publish these faster APIs so that browsers such as Firefox and Opera can run well on your OS? For all Apple says about being standards-compliant and welcoming to software developers, they're not. In fact, every product Apple has ever created tries to reinvent the wheel; despite what every Apple fanboy claims, they never create something truly new. Fortunately for them, much of the time they do a nice job of the reinvention.
And by the way, of the four browsers I have installed on my Windows machine, Safari is my last choice: it eats up 150+ MB of RAM compared with about 50 for IE7, 40 for Firefox, and 20 for Opera, it looks ugly, and it doesn't offer anything new.