
Re: A different way of assessing applicants?



Hi,

I didn't have time to answer these questions last week
(though I discussed some of this at LinuxTag).

Matthew Palmer <mpalmer@debian.org> writes:
> Several questions for the gathered masses:
> * (For other AMs) Has anyone else tried this in their AM activities? 
> 	Feedback?

I'm currently testing a different approach to P&P with one of my
applicants. The first task is simple bug triage, which ensures that
people know how to work with the BTS, the control bot, and submitters. I
plan to do something with QA (perhaps preparing a QA upload of a
package) and i18n as the second and third steps.
Some of the areas covered by the usual P&P templates can't be handled
this way (the gpg material is one example), but actually doing the work
and answering four or five questions seems far better than the current
approach.
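For the bug triage task, the applicant mainly needs to drive the BTS
through mails to the control bot. A minimal sketch of such a control
mail might look like this (the bug numbers and package name are made up
for illustration):

```
To: control@bugs.debian.org

# Adjust a wrongly set severity (hypothetical bug number)
severity 123456 wishlist
# Ask the submitter for more details
tags 123456 + moreinfo
# Move a misfiled report to the right package (hypothetical package)
reassign 123457 somepackage
# Stop command processing; anything below is ignored
thanks
```

The control bot replies with a transcript of what it did, which also
gives the AM something concrete to check.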

> * (For FD/DAM) What difficulties do you see in doing the final checks for
> 	applicants assessed under a scheme of this sort?  In other words,
> 	what should I make sure I include in my evaluation regime to ensure
> 	that the end result is useful for your work?

Hmmm. That's still one of the things I'm not sure about. Whatever you
do, *please* provide a full mail log. Depending on what you ask your
applicants to do, an overview of the work they've done and some
pointers to more information (mailing list discussions, PTS pages, bug
numbers, ...) would be nice.
I really can't say what I want to see, as I have no idea what you plan
to do.

It would be nice if you could provide a rough overview of the tasks
you've planned so far.

Marc
-- 
BOFH #167:
excessive collisions & not enough packet ambulances
