On Twitter the other day, one of my regular contacts (Rising Stars Press) posted a meme that read something along the lines of “All right folks, give yourselves 500 xp for surviving 2016” and an image of Gary Jackson. And, as sometimes happens, one stray thought had a whirlwind romance with another, and before you know it, a highly improbable and quite radical concept was born.
What if players gave the GM experience after each game session for all the things that GMs are supposed to do well?
And what if the XP so received translated into boundaries within which the players trusted the GM, would ‘go with the flow’ and not sweat the small stuff?
No, I’m not seriously proposing that such a system be implemented. But simply developing one with your players holds benefits that aren’t immediately obvious.
You see, it’s relatively easy to make a list of all the things that GMs should do for, and in, each game session. It’s quite another thing to put them in any sort of priority sequence, let alone to have some means of assessing the relative weight the players collectively place on the different aspects of gaming.
But, if you were to state that the absolute maximum xp to be awarded to a GM was 1,000xp, with up to X points coming from this aspect of the game, up to Y being awarded for another, and up to Z being awarded for a third – and so on – getting the players to reach agreement between themselves over what X, Y, and Z should be tells the GM what their priorities are, and in the process, what areas of his game he should be focusing on.
But even more usefully, the GM gets to hear what the players think as they are discussing it.
There’s no actual need for the players to rate any individual performance by the GM; if there’s something they aren’t getting enough of, they will naturally place a greater emphasis on that, it’s only human nature.
Furthermore, suitably massaged, it can provide a guideline for what the players want the GM to spend his prep time on – which might be radically different to what the GM is actually doing – as described in Game Prep and the +N to Game Longevity.
A list of tasks
To get the ball rolling, I’ve come up with a list of thirty-four GM functions, within five broad categories – Concepts, Prep, Execution, Admin, and General. Because the list is so long, I’ve actually placed it in three columns – not something that I routinely do at Campaign Mastery. After the list, I’ll discuss each (relatively briefly).
Let’s take a quick look at what each of these encompasses, because it’s not completely clear in some cases.
Concepts are all about ideas and big-picture.
1.1 Engaging Background
Are the players interested in the background, and does the background make a difference, or is it simply hanging behind the action like set dressing?
1.2 Scope Of Background
Is the background too big, sprawling, and complicated for the players to grasp? Has the GM introduced too much too soon? Or is it too small to inspire and give the game world a distinctive flavor?
1.3 Interesting NPCs
Are the NPCs interesting to interact with, or are they meaningless cyphers that the players couldn’t care less about?
1.4 Fascinating Plots
Do the plots, in general, hold the players’ interest? Or are they a mechanical plod from A to B?
1.5 Surprising Twists
This evaluates not just the quality of the twists themselves but whether they were actually surprising, or whether the players saw them coming from a mile away.
Prep is about taking those big-picture elements and turning them into an adventure ready for the PCs to get involved in. All prep questions come in two forms: was there enough prep done and what was the quality of the result?
2.1 Dialogue And Narrative
Canned dialogue and narrative are essential components of any game. If the players are comfortable with the GM winging it, they may require less prep in this area. But if it’s hard to tell who’s doing the talking, they may want more prep time invested. Remember, this isn’t about how much the GM did, but how well he satisfied player expectations in this area.
2.2 Adventure Design
How interesting and complete was the basic adventure design? Was it too big, too complicated, too small, too yellow, too anything?
2.3 Plot Logic
Did the plot make sense? What holes manifested, if any? Did the plot seem to emerge from the basic personalities of the characters (both PC and NPC) involved? Or did it seem/feel contrived?
2.4 Simulation Aids
Props and minis – did the GM have everything he needed in this respect, or some reasonable facsimile?
2.5 References
If there were any references that had to be consulted, did the GM have them at hand?
2.6 Rules Familiarity
How well did the GM know the basic rules? If any unusual section of rules needed to be used, had he read up on them in advance? Or was the game continually interrupted by the opening of rule books?
Prep all happens in advance of play. How well the GM actually performed on the day is the province of Execution, the biggest single category.
3.1 Player Immersion
Immersion is good. How well did the GM make the players forget the mundane world outside the game table? How tangible did he make the game world feel?
3.2 Scope For Player Decisions
How much room did the GM make for players to make their own decisions? Was he prepared to let them flounder until they thought of something else to try, or did he head frustration off at the pass by introducing a new plot development when the players got lost? Could the players simply follow their noses and adventure by the numbers, or were there non-trivial decisions they had to make – and did those decisions alter the outcome?
3.3 Dynamic World
How much did the campaign world feel like it had evolved since the last game? Since the first game? Is the world dynamic, changing around the PCs, evolving and developing new angles and situations that have genuine impact on the choices available to the players and keep the game fresh? Or does the whole thing reset to a static default at the end of each adventure?
3.4 Responsive World
How much does the world evolve as a specific consequence of PC decisions, both past and present? Are there consequences (beyond mere game mechanics) for mistakes – and benefits for smart play?
3.5 Responsive Actions
How much did the current adventure evolve as a consequence of PC decisions, for good or ill? Or did the GM only have to pay lip service to player self-actualization?
3.6 PC Integration
How much did the PCs feel connected to the world, a part of it, and how much did they feel like they were tacked-on afterthoughts? Would replacing any of the PCs with someone of identical skills and stats but different personality have made a material difference to the situation or how it developed?
3.7 Difficulty Of Combat
Were the fights too easy or too hard – or just right – and was that difficulty level appropriate for the situation in-game?
3.8 Tactical Involvement
Did the players have to think about their choices of actions? Or was this a push-button RPG session in which the PCs were algorithms, with a predetermined and predictable response to every situation?
3.9 Emotional Involvement
Did the players feel like their characters were emotionally involved in the game – caring about the things they were supposed to care about, angry about the things that should have angered them, enjoying experiences that they would have found pleasant? How much did the players care about the outcome – and did that match the degree to which the PCs should have cared about the outcome?
3.10 Vicarious Involvement
How much fun was it to be a fly on the wall when not directly involved in the action? Did the group feel like leaves in the wind, each following their own almost-random path but with an overall collective direction, or did they feel like a cohesive unit, sharing in each other’s successes and feeling each other’s defeats?
3.11 Roleplay
How much scope did the GM give for players to roleplay their characters? Did the players feel like they were their characters at all times? Most of the time? Whenever not confronted by system mechanics? Whenever not in combat? Only when the GM provided a set piece for roleplay? Or not at all?
3.12 Adequacy Of Rewards
Would the PCs have felt adequately rewarded for their efforts? Do the players? Is there a difference, and is that difference appropriate? Were the rewards disproportionate to the situations faced?
3.13 GM Flexibility
When something unexpected happened, how well did the GM bend to accommodate it? If the players wanted to do something he hadn’t planned for, could he cope? Did it feel like he was ready for anything the players might choose to do?
Some admin is almost inevitable. For most GMs, it’s a necessary evil, to be minimized at every turn. The best use it as a future planning tool, and are experts in their own campaigns, able to answer almost any question at the drop of a hat.
4.1 Experience
How much did the characters learn from the encounter, and is that reflected in an appropriate amount of XP? Are the characters progressing so slowly that they feel stuck in neutral, or advancing so fast that the players can’t take the time to enjoy what they’ve got?
4.2 Character Evolution
How much did the PCs, both individually and collectively, evolve in the course of the adventure? Were changes to characters as a result of previous game sessions at least signposted within the adventure? Do advancements made through game mechanics feel like they come out of the blue, or like natural steps on the characters’ personal journeys from what they were to whatever they will become (even if no-one knows what that may be)?
4.3 Rules Knowledge
How well did the GM know the rules of the game he was running – and how disruptive was any shortcoming in that department?
4.4 Rules Interpretation
Did the GM seem impartial when adjudicating rules decisions? Was he able to make a decision on the fly when the rules were inadequate or too complicated? Did the game keep moving, or did it bog down?
4.5 Campaign Knowledge
How well did the GM know his own creations? Were there any obvious oversights, and were any of these serious enough to require a retcon?
4.6 Social Management
While the GM can’t dictate player behavior, he is responsible for managing the social situation. Were the players engaged enough that they didn’t get caught up in side conversations over the top of whatever else was happening? Was the spotlight rotated quickly enough, and did each player get a fair share of it? If any awkward situations arose, did the GM manage them?
4.7 Timing
Did the game start on time, and if not, how much of the blame belongs to the GM? Did the game finish early or late? Did the GM fill the hours that the game was allocated?
Finally, we have a trio of big-picture overall considerations. It’s not far wrong to say that if the GM gets these three things right, it doesn’t really matter what happens in all the other areas – but the odds are that if the GM does well in these three areas, he would also score high marks in several of the earlier areas.
5.1 Fun
How much fun was it to be part of the game? And make no mistake, a PC can be enduring absolute misery while his player is having a whale of a time! One definition of role-playing vs roll-playing that I’ve come across in the past: role-playing makes it fun even when your character isn’t having fun; roll-playing implicitly ties a player’s enjoyment to how much his character is enjoying himself.
5.2 Food For Thought
How much food for thought did the game offer? Was it intellectually fascinating? Did it offer situations that the players had never expected or come across before? Was it original, or did it feel derivative? Was it so original that it became hard work just trying to keep up?
5.3 Eagerness To Continue
How much are the players looking forward to the next game session? Would they have wanted to keep going if time permitted? Would playing a day earlier be a good thing just because you got to play sooner?
Three systems of approach
There are three basic approaches that can be adopted to negotiating how much each of these categories should be worth. The first is to start with a total and break it up amongst the major sections, then sub-divide to reach each detail item’s worth. The second is to allocate a convenient base number to each of the detail items and then adjust accordingly, letting the broader categories take care of themselves. The third is a “score out of five” system.
Let’s take a look at how each of these would work.
In the big-picture approach, you start with a convenient total maximum award and then break it up amongst the major categories. With 34 tasks, 1700 or 3400 seem to be obvious choices, but let’s be a little unconventional and choose 2000. That gives each of the five major categories a base 400 points each – after which, it’s a matter of robbing Peter to pay Paul.
You might decide that the big three items at the end should be worth half the total – that’s as much as everything else put together, by definition. That would certainly be a reasonable weighting, in my personal opinion. So category 5 is worth 1000 points and categories 1-4 are worth 1000 points between them, or an average of 250 points each.
Next, the players might decide that execution should be worth as much as the remaining three categories put together. That means that category 3 is worth 500 points, while categories 1, 2, and 4 are worth 500 points between them.
Opinion might well be divided over which of categories 1 and 2 is more important. I can even see the balance being different from one campaign to the next, even with exactly the same players and GM. Admin, on the other hand, is almost certainly going to be low man on the totem pole. Let’s say that the players decide that categories 1 and 2 should each be worth three times as much as category 4. 3+3+1=7, so category 4 would get 1/7th of the 500 points – call it 70 points for convenience – while the rest (430 points) gets split evenly between categories 1 and 2, i.e. 215 points each.
So far, then, we have: Category 1, 215 points. Category 2, 215 points. Category 3, 500 points. Category 4, 70 points. Category 5, 1000 points.
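For anyone who prefers to see the arithmetic laid out, here is a minimal sketch of that top-down split; the variable names are mine, and the rounding choices simply mirror the worked example above:

```python
# Top-down split of a 2000-point pool, mirroring the worked example.
total = 2000

# Step 1: category 5 (General) is worth as much as everything else combined.
cat5 = total // 2                       # 1000
remainder = total - cat5                # 1000 shared by categories 1-4

# Step 2: category 3 (Execution) is worth as much as 1, 2, and 4 together.
cat3 = remainder // 2                   # 500
remainder -= cat3                       # 500 shared by categories 1, 2, and 4

# Step 3: categories 1 and 2 are each worth three times category 4 (3:3:1).
# 500 / 7 is awkward, so round category 4 down to a convenient 70.
cat4 = 70
cat1 = cat2 = (remainder - cat4) // 2   # 215 each

print(cat1, cat2, cat3, cat4, cat5)     # 215 215 500 70 1000
```

Any other cascade of decisions plugs into the same shape; only the ratios change.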
Alternative perspective insights
Of course, there are an almost infinite number of alternatives. There would be absolutely nothing wrong with a breakup of 200, 300, 600, 100, and 800 points. Or one of 100, 300, 800, 200, 600 points. Or 300, 100, 800, 100, 700 points. If the objective was to playtest a new set of rules, the breakup might be 100, 800, 800, 100, 200 – placing all the emphasis on this adventure and its execution, with very little regard for the bigger picture.
Because that’s another way to think about the five major categories: 1 and 5 are big-picture, 2 and 3 are immediate, and 4 is the infrastructure that ties everything together.
In any event, we have a breakup of 215, 215, 500, 70, and 1000 points. Now we look at each of the categories and sub-divide. Again, it’s just my personal opinion, but I would start with the smallest total and go up from there.
Category 4, admin, 70 points:
Admin represents 7 tasks, which gives an average of 10 per task. It might be collectively decided that 1 and 4 are the most important, then 2, and everything else in third place. So let’s start by making 1 and 4 worth 20 points each, and 2 worth 15 points, and see what the rest would be worth: 20+20+15=55, leaving 15; divided four ways gives just under 4 points each. To have them be worth 4, we would need to steal back a point from somewhere else, with 2 the most likely candidate. That gives a final breakdown of:
- 4.1 Experience: 20 points
- 4.2 Character Evolution: 14 points
- 4.3 Rules Knowledge: 4 points
- 4.4 Rules Interpretation: 20 points
- 4.5 Campaign Knowledge: 4 points
- 4.6 Social Management: 4 points
- 4.7 Timing: 4 points
As before, there are many alternative choices that would be equally valid. 10, 10, 10, 30, 5, 0, 5, for example. Or 15, 5, 5, 20, 5, 0, 20, which should tell the GM immediately that he needs to keep a closer eye on his timing (he should automatically always be keeping a careful watch on his fairness).
Category 1, concepts, 215 points:
Five tasks, an average of 43 points each. That’s a slightly awkward number, so ignore the 3 and set a base value of 40 points each, with 15 bonus points to be awarded.
For my money, if I were assessing where I place my priorities, there isn’t a lot to distinguish one of these as more important than the rest. 3 and 4 might be a shade more important than 1, 2 a shade less important, and 5 the low man on the totem pole by a small margin. So, let’s say +10 each for 3 and 4, -10 for 2, and -20 for 5, increasing the bonus pool to 25.
That gives allocations of:
- 1.1 Engaging Background: 40;
- 1.2 Scope Of Background: 40-10=30;
- 1.3 Interesting NPCs: 40+10=50;
- 1.4 Fascinating Plots: 40+10=50;
- 1.5 Surprising Twists: 40-20=20.
What to do with the bonus pool?
At this point, the players have to decide what to do with those 25 unallocated points. They have several choices: they can simply forget them; they could add +10 to 3 and 4 and +5 to 5, increasing the emphasis on the two major items and diminishing the de-emphasis on plot twists; or they could define some additional skill that fits in the category and give it the entirety of the 25 points. That category might be “Game Physics”, or it might be “Historical Knowledge”, or it might be “Educational Value” (in a game used for teaching students), or it might be something broad like “Creativity”. Or it might be a couple of these, each receiving a share of the 25 points – and possibly triggering a reassessment of the amounts already allocated.
Remember, the point of the exercise is for the players to define the relative importance they place on various aspects of the GM’s role in the game and quantify those results for the GM to use as a planning / prioritization / self-improvement tool.
The other categories
I could work through the other categories, but the above examples really sum up the entire process between them. So, instead, let’s turn our attention to method 2.
The second approach starts with the assumption that all aspects of the GM’s craft are equally important, at least in theory, and then modifies that theoretical result into something that accords a bit more closely with reality. There are 34 tasks on the list; if we give each item a base score of 35, that’s 1,190 points in total. If we’re aiming for 1,000, we would need to de-emphasize some tasks by a collective total of 190 points. Dropping a task from 35 to 20 points in value saves 15 points; dropping about 1/3 of the list would get us close to the 1,000-point target.
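As a sanity check on those numbers, a few lines of arithmetic (my own sketch; a base of 35 points per task is the assumption that makes 34 tasks total 1,190):

```python
import math

# Flat-base approach: every task starts equal, then some are de-emphasized.
tasks = 34
base = 35                                # 34 x 35 = 1190 points in total
target = 1000

surplus = tasks * base - target          # 190 points over the target
saving = base - 20                       # dropping a task to 20 saves 15

drops = math.ceil(surplus / saving)      # tasks that must be de-emphasized
print(tasks * base, surplus, drops)      # 1190 190 13 -- about 1/3 of the list
```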
This really is very similar to the first approach, except that it increases the scope of the adjustments. You could boost “fun” by reducing “plot twists”, or emphasize Dialogue while deemphasizing Rules Knowledge and Dynamic World.
What it often means is that you don’t get the big swings that the first system can produce. Take category 5 of the “big picture” system: we (hypothetically, as an example) gave it a total value of 1000, which means that each task has a value of about 300, with 100 points left over. Compare those values with the ones that we were considering for the tasks in category 1, where we were sweating five-point differences! While the initial allocation of points and emphasis at the big-picture scale seemed reasonable, the results when you get down to the nitty gritty can be quite startling and even carry implications that weren’t intended.
On the other hand, it takes a big task and turns it into a series of smaller, more manageable tasks. So there is a lot to commend it.
Frankly, neither approach is all that ideal. Which is why I came up with a third alternative.
The “Score out of 5” system
Each player rates the importance of the five major categories out of 5, then rates the importance of each task on the list out of 5. The GM gathers these results: he multiplies each task rating by the relevant category rating, then totals the products from each player to get an aggregate for each task. Finally, he produces a grand total of all these aggregates; dividing the overall score required (be it 1,000 or 5,000 or whatever) by that total gives a conversion factor, by which he multiplies the unrounded aggregates, rounding the results as he sees fit (to the nearest 5, 10, or 20).
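The procedure is easier to follow as code. This is a minimal sketch for a single task; the figures match the worked example that follows (three players, a grand total of 1,058, and a 2000-point target), but the structure is my own:

```python
# "Score out of 5" aggregation for one task (sketch only; the numbers
# match the article's worked example for task 1.1).
ratings = [
    (4, 4),   # player 1: (category rating, task rating)
    (2, 3),   # player 2
    (1, 2),   # player 3
]

# Multiply each player's task rating by their category rating, then sum.
subtotal = sum(cat * task for cat, task in ratings)    # 16 + 6 + 2 = 24

# The same is done for all 34 tasks; suppose the grand total of all the
# subtotals is 1058 and the required overall score is 2000:
grand_total = 1058
factor = round(2000 / grand_total, 2)                  # 1.89

adjusted = subtotal * factor                           # 45.36
rounded = 5 * round(adjusted / 5)                      # 45, to the nearest 5

print(subtotal, factor, round(adjusted, 2), rounded)
```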
Let’s see how it works. I’ve invented 3 “players” with different priorities and preferences, then had them do the “ratings by 5”. The table below shows the results of the whole process (Never fear, I’ll walk you through it!)
Player 1 is a Storyteller, and even the roleplaying of his character is secondary to his engagement in an interesting plotline. Player 2 is a typical roleplayer, whose primary interest is in playing his character, and everything else is measured against its contribution to that end. Player 3 is someone who simply wants to let off some steam at the end of the week by killing something (or at least, beating it to a pulp). As you might expect, keeping this disparate group happy, week after week, would not be easy, with the first two often united in common interests against the third (the secret would be making the combat an integral part of the plot, using the resulting commonality between players 1 and 3 to balance the trend toward plot/roleplay at the expense of combat).
The first column identifies the category and the tasks within each category.
The second column has the scores player 1 gave to each category and to each task, out of 5. For example, Category 1 has been given a rating by him of 4 out of 5 for importance, and task 1.1, an engaging background, has also been rated by him as 4 out of 5 for importance.
In the third column, I’ve multiplied each task rating by the category rating – so, for player 1, this is a result of 16 (four from the category multiplied by four from the task).
The fourth and fifth columns give the ratings and multiplied products for player 2 in the same way. For category one, player two gave a rating of 2, and for task 1.1, a rating of three, giving a combined value of six.
Ditto the sixth and seventh columns, which give the results from player 3. She rated category 1 as a one, and task 1.1 as a two, so the product of her scores for this task is two.
The eighth column is where things get interesting. I’ve added the product for task 1.1 from player one (16) to that for the task from player two (6), and that from player 3 (2) to get a subtotal of 24. At the bottom of the column, I’ve totaled all these scores, and ended with a total of 1,058. Underneath that, I divided 2000 by this total to get an adjustment factor of 1.89. In theory, if I were to multiply each rating by 1.89, they should add up to a total of exactly 2000.
In column 9, I’ve done exactly that (without bothering to check the total). For task 1.1, the subtotal of 24 turns into an adjusted value of 45.36.
I didn’t check the total because in Column 10, I’ve rounded each result to the nearest 5. For task 1.1, that was (quite obviously) 45.
Again, in theory, the total of all the rounded, adjusted, results should be 2,000. Again, I haven’t bothered checking this; from what I noted as I produced the table, the rounding errors appear to be on the high side, though, so the end result might be 2030 or something. In the real world, I would then tweak the results to distribute that error and get an exact total, but that wasn’t necessary for the example.
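That final tweak, distributing the rounding error so the grand total comes out exact, can be sketched like this. The scores here are invented placeholders, and shaving points off the largest values is only one of several reasonable policies:

```python
# Round adjusted scores to the nearest 5, then shave any surplus off the
# largest values until the target total is hit. Data is invented.
raw = {"fun": 117.6, "plot twists": 22.8, "timing": 52.6}
target = 190

rounded = {k: 5 * round(v / 5) for k, v in raw.items()}    # 120, 25, 55
error = sum(rounded.values()) - target                      # 200 - 190 = 10

# Take 5 points at a time from the biggest scores until the error is gone.
for k in sorted(rounded, key=rounded.get, reverse=True):
    if error <= 0:
        break
    rounded[k] -= 5
    error -= 5

print(rounded, sum(rounded.values()))   # totals exactly 190
```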
This method combines the advantages of both the alternatives with none of the drawbacks. Values for any specific task range from a minimum of 1 to a maximum of 25. The geometric nature of one number multiplied by another means that relative value rises disproportionately to small increases in the actual ratings – going from 4 to 5 might not seem like a big deal, but going from 4 times 4 to 5 times 5 is a difference of 25-16=9, an increase of more than 50%.
At the same time, the effect of totaling the results from each player means that something only one player rated highly counts the same as something all three rated as being of medium-low or mediocre significance. Only things that they all agree on get the really big scores.
It also means that if one player habitually rates such things high, or low, such variations are evened out by the system, removing individual player biases.
Analyzing the example results
So, what was the washup of this theoretical example? The highest score by a small margin was “fun”. Players will forgive almost anything if they are having fun! It scored 115 points.
The only other result in the triple digits was 3.5, Responsive Actions. The players are placing an emphasis on NPCs reacting to PC decisions, suggesting either that this is a weak point of the GM’s, or that they want a continued focus on it.
Scoring 95, and thereby coming in third, is 3.13, GM Flexibility. There seems to be some concern about plot trains, which definitely ties into the Responsive Actions result.
Scoring almost as highly, at 90 points, is 3.2, Scope For Player Decisions. There is a definite theme developing here!
Tied for fifth place, with 85 points, are another two items from the execution category, 3.6, PC Integration, and 3.8, Tactical Involvement. The players want to feel more strongly that their characters are a part of the world, and want combats to be more tactical than straightforward fights; again, the same theme shows up.
Two execution items also tie for 7th place, with 75 points, followed by two more execution items and the remaining general items at 70.
That means that nine of the top 12 scores are to be found in the execution category, and the other three comprise the general category. The highest score in the concepts category is 50, the highest in the prep category is 60, and the highest in the admin category is 55. These are all roughly half the score of the highest-rated item.
All in all, the picture emerges of a trio of players who are fairly happy with the campaign the GM is running, with just one area needing specific attention.
Wrapping up the bundle
This article was originally subtitled “crawl before you can walk”. I’ve seen too many novice GMs try to run a marathon before they can even crawl. Don’t haul out and use your best idea for your first campaign; it will almost certainly prove to be too big and complex for your level of expertise as a GM. Start with something simple and fairly generic, with a lot of blank spaces; then add to it, week after week, filling in those empty places. This week, they discover a clever twist on Giants; next week, a twist on the relationship between piety, religious faith, and nobility; the week after, do something interesting with the politics of a new kingdom. Master the “just in time” approach and build your campaign using the Baby Steps In Campaign Design technique that I wrote about way back in Roleplaying Tips number 308.
One final application of the “scores out of five” technique merits mention as a closing thought: The GM could always do the survey on his own to assess the priority that he is currently placing on things. Because there is only the one “player” providing scores, the Factor would be relatively high, but the results would be directly comparable with player expectations and requirements as shown by their results, and should be highly enlightening.
This is a simple tool, but capable of producing profound insights. Make of it what you will – but remember that small steps in a given direction can have a big impact; if a campaign is mostly working already, don’t throw the baby out with the bathwater!