Monday, February 2, 2009

Baby with the bathwater?

So here we stand at the window with our baby bath full of dirty tiered-composition water, asking: do we just chuck it out the window? I think it may be time to check whether it is absolutely necessary for the panel comp baby to take the four-storey drop with the second-hand suds.

Now, on the tiered comp side, I think it was a worthwhile experiment in half bringing the objective and subjective comp camps together. [Side note: we kind of already knew it wouldn't work at Dogcon given it didn't work at Masters, but despite this its failure at DC seemed to catch people off guard. I personally went in expecting a 6-point range.]

There were several problems, and many can be argued back and forth from both sides, but the central problem, I think, is that it combined the "non-core" attributes of both the objective and subjective approaches.

From an objective standpoint it didn't take into account what was in the army, just which book it came from; from a subjective standpoint it constrained the reviewer and prevented them from assessing the army against the rest of the field.

Now the first question from here is: can it be salvaged, or is the combination a fatal flaw? I think it may be possible to get the train back on track.

Some potential changes that I think would give the panel both more freedom and more direction, and hence better results:

- The tiers need to not be hard tiers that appear to cap and floor an army's score. Rather, I'd prefer to see what score we expect the average army from any book to achieve. Don't then restrict the pluses and minuses; just say "score this army out of 5, 7, 10 or whatever, keeping in mind the average army of this type is (for example) a 2 out of 5". This way the panel can mark not only with regard to that army book but also against the field as a whole (there's a rough sketch of what I mean just after this list).

- We need, for each book, a sample "average" list for each tourney. Happy to admit I WAS WRONG in opposing this earlier. I have finally come around after being an opponent of it, as I thought people would try to loophole the standard list, or take something different and cry like Roger Federer when they get a comp hit: "What do you mean my regenerating Grave Guard bunker is significantly harder than the two units of zombies in the standard list? Let's have a 37-page thread on how unfairly I was treated." But here's the thing: people will cry over those four or five points either way, so if clarity produces more consistent results and makes people feel more aware of their score, I'm all for it.
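
To make the first point concrete, here is a minimal sketch in Python of the "soft anchor" idea. The books, the expected averages, the 0-5 scale and the function name are all hypothetical, made up purely to illustrate guidance-without-caps rather than to propose actual values:

```python
# Soft anchors: each book carries an expected score for an *average* army,
# but the anchor guides the mark rather than capping or flooring it.

EXPECTED_AVERAGE = {        # hypothetical example values only
    "Daemons of Chaos": 1.0,
    "Vampire Counts": 2.0,
    "Empire": 3.0,
    "Ogre Kingdoms": 4.0,
}

MAX_SCORE = 5  # panellists still mark out of 5; no per-book cap or floor


def describe_score(book: str, score: float) -> str:
    """Relate a panellist's score to the book's expected average."""
    anchor = EXPECTED_AVERAGE[book]
    drift = score - anchor
    return (f"{book}: scored {score}/{MAX_SCORE} "
            f"(expected average {anchor}, {drift:+.1f} relative to it)")


if __name__ == "__main__":
    # A filthy Vampire Counts list marked below its book's anchor and a
    # soft Daemons list marked above its anchor. Neither mark is "illegal";
    # the anchor informs the score without bounding it.
    print(describe_score("Vampire Counts", 1.0))
    print(describe_score("Daemons of Chaos", 2.5))
```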

With those two changes I think we get a system that is more flexible and consistent. The flexibility, I believe, is the most important thing. Speaking from professional experience, as soon as you overlay multiple static rule sets you can loophole them.

[Side note: it is interesting that people can consider objective comp will eventually be the answer. Look at another system of overlaid complex rules, tax, and ask how governments around the world have started to cut down on tax structuring and arbitrage: by putting in place subjective tests, because people consistently found new loopholes with every change to the traditional objective tests. But hey, I'm sure we are better at designing an overlaid objective system than every tax lawyer engaged by governments around the world who lives and breathes those systems and specialised in them academically.]

But the next question is: are tiered and panel comp inextricably linked? Can you only have one if you have the other? This appears to be the argument conveniently being pressed home with vim and vigour by the formula composition guys. [Side note: I was taught this as a negotiation technique: link two elements together then argue against one and effectively get them both conceded.]

I personally thought panel comp was working pretty well prior to the introduction of the tiers; whilst there was the occasional outlier whose score wasn't justified, we sure abandoned the approach very early given how well it seemed to go. Sure, army bias was exaggerated, but I think tiering in conjunction with it only compounded that bias.

One other thing I would like to encourage is for panel members to look back through their results and test whether the "average" they've given each race is appropriate. On any panel I'm a member of I do this, and if I vary too much for one race, or seem to be displaying bias, I look through again and try to form arguments against my initial feelings. This isn't to say I always change my scores, just that if I'm not giving an average mark for a race, I want to be certain it isn't because I've misread the "hardness" of the tournament or because of unreasonable bias.
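
For what it's worth, here is a minimal sketch in Python of that self-audit. The races, scores, expected averages and the tolerance are all made up for illustration; it is the shape of the check I'm describing, not a tool anyone actually runs:

```python
# Self-audit: compare the average score I gave each race against the
# average I intended to give, and flag races that drift too far.

from collections import defaultdict
from statistics import mean

# (race, score I gave) for every list marked at a hypothetical tourney
my_scores = [
    ("Vampire Counts", 1.5), ("Vampire Counts", 2.0), ("Vampire Counts", 1.0),
    ("Empire", 3.0), ("Empire", 2.5),
    ("Wood Elves", 4.5), ("Wood Elves", 4.0),
]

# What I think an *average* army of each race deserves at this tourney
expected = {"Vampire Counts": 2.0, "Empire": 3.0, "Wood Elves": 3.0}

tolerance = 0.75  # drift beyond this and I go back and re-read those lists

scores_by_race = defaultdict(list)
for race, score in my_scores:
    scores_by_race[race].append(score)

for race, scores in sorted(scores_by_race.items()):
    avg = mean(scores)
    drift = avg - expected[race]
    flag = "  <-- re-check for bias or a misread field" if abs(drift) > tolerance else ""
    print(f"{race}: gave {avg:.2f} on average, expected {expected[race]:.2f} ({drift:+.2f}){flag}")
```

Nothing in that forces a change of score; as above, the point is only to know why a race's average has drifted before letting the marks stand.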

So I guess what I'm suggesting for the way forward is, for at least the larger tourneys, to revert to straight panel composition, preferably with guidance on the average score for each army book provided.

The next few months will be an interesting time in Australian Warhammer. I get the feeling that the ETC (a system I actually agree with in the environment it is intended for: a battle-points-only tourney bringing many different tourney scenes together) is being used to shoehorn comp formulas into Australian tourneys, primarily for the benefit of the elite players.
