Who writes/QA's the tests in use at Upwork?
I recently plowed through a few, and at least the technical ones are rife with completely irrelevant questions (for instance, knowing rpm is not even remotely a requirement for using mysql). Not to mention questions whose answers are so poorly formulated that it's hard to tell what the examiner even means -- with the unavoidable consequence that an examinee who might otherwise know the solution is unable to pinpoint it. And of course, questions that are severely outdated (PHP4 questions in a LAMP test?). Some questions even look machine-translated from another language.
Other tests range from obvious, "For Dummies"-style questions to fairly advanced implementation-level details of specific software that a mid-level user couldn't answer even with the manual available for reference. What purpose do the extremely basic questions serve, if any, in the context of a test that is otherwise relatively challenging (other than being filler material to bloat the test)?
This isn't meant to be as harsh as it sounds, but the impression I have after a limited selection of tests is that they are very shabby. Have I had bad luck in my choice of tests thus far, or is there a reason why the tests are the way they are?
Thanks for sharing your feedback. Upwork skill tests are offered by a third party, and the team is currently working on improving the tests we have, removing outdated tests, and adding new ones. It's a work in progress, however, and will take some time.
There is an interesting quirk with tests for old technologies that no one liked and no one uses: if a test has only been taken by 3-10 freelancers, you can pass it, take 1st place, and get a fancy badge on your profile))
I agree there is a problem with trying to create multiple choice questions for some of the material. I discovered this in the grant writing test. I have 20+ years of experience in writing grant applications for both government and private party funding sources.
The test questions did not reflect my knowledge of grant concepts. Instead, they focused on things like the application format, as if every application were the same. For example, the test asked for the order of certain elements. It's silly to try to guess this - every grant application has its own outline for the elements.
It would be much more telling to ask what items are contained in a needs statement than to ask whether it comes before or after another section. There were no questions (that I recall) about budget development, even though it is a critical part of every grant.
As an aside, it would be cool if we could show 3rd party tests on our profile from sites like HackerRank, showing how many challenges we have completed in each language.
Yeah, the Unix test was quite bad and all over the place. I've been working with Unix systems for years, and some of the questions on the test are not things you would use often or remember, even if you'd been working with Unix for 20+ years.
I just joined and gave one of the tests a try. There were ambiguous, poorly worded prompts; most of the questions were trivia; and some of the questions showed up multiple times with only minor changes.