First and foremost, I want to thank everyone for the fantastic feedback on yesterday’s question. If you commented, thank you. If you haven’t yet, please consider weighing in!
I’m hoping to have a new list next week, but first I want to talk about a few points that have come up, both general and specific, that will hopefully illustrate my thinking on this. As with the list itself, none of these are set in stone, and I think there’s some value in having them out there.
First, the final list will be neither perfect nor comprehensive. I’m hoping it will cover a broad swath of things, but exceptions will (must!) exist to its scope. This is a liberating point, since the alternative is almost nightmarish in its complexity, but it also has a more subtle element. The simple fact is that whatever final list I settle on, it will almost certainly be both too long and too short, and it will have the wrong elements. This is just a natural consequence of trying to impose simplified reporting on a complex system – the process is lossy and, by definition, incorrect. That means a lot of objections and counterarguments are at least as correct as whatever position I put forward because, ultimately, we’re all talking about different ways to be wrong.
This is not an argument for relativism; it just demands a different kind of rigor. I will try to make sure I have a good (and well-communicated) reason for what I do, but it is entirely reasonable for someone to have different priorities which would suggest a different methodology. That is totally cool, and I’m glad they care enough to have an opinion. I absolutely agree that there are other factors to look at, and that a GM-centric perspective has profound and specific flaws. But that will never not be the case – choosing a focus is not a rejection of those facts, but a matter of acknowledging them and doing something anyway out of necessity.
Anyway, I mention all this to underscore that I find disagreement intensely useful in this process, but also to note that when I’m not swayed, it doesn’t mean I disagree with a position – it may just not fit the goal I’m trying to accomplish.
Second, the final list needs to be short, but the route to get there should be long. A long list is not practical, simply because any final list needs to be simple enough to keep in mind without excess bookkeeping. However, I want to get to that list by distilling as many ideas and perspectives as I can, in hopes that doing so will make the final list better.
Third, there are a few criteria for what needs to go on the list, and these are where a lot of wrongness is going to come up. First, the questions need to be at least reasonably specific. The goal is not to ask “How many times did you encounter an adrenaline rush in play?” because that sort of specificity is a bookkeeping nightmare. At the same time, “Did you have fun?” is too broad to be useful (no matter how important it is). To come back to the Apgar score: while it is a measure of the child’s health, “How healthy is the child?” is not one of the questions. The purpose of the more specific questions is to build an aggregate approximation of an answer.
This means that picking the questions will be a balancing act. They need to be concrete enough to have an answer that is either mostly objective or, if subjective, not too muddled. That’s a challenge, and it’s a big part of the fourth point.
Fourth, one of the subtle things about the Apgar score is that each element is rated very simply as 0, 1, or 2. It’s a little more than a Yes/No question, but still very simple: 0 is notably bad, 2 is notably good, and 1 is in the middle. A compressed scale like this strips an answer of nuance, but it has the advantage of smoothing out a lot of subjectivity by reducing border cases. Was something notably good? Give it a 2. Was it notably bad? Give it a 0. Otherwise, give it a 1. Yes, absolutely, there’s a little room for waffling, but nowhere near the kind of problems and complexities that emerge if you ask someone to rate an experience from 1 to 10.
This simplicity of rating is another reason why you don’t want the questions to be too complex – it doesn’t tolerate “But” answers. For example, if the question is “Did you have fun?” and the player thinks “Well, the fight was awesome, but the scene in the market REALLY dragged. Guess I’ll call it a 1.” then the answer is non-informative. Ideally you want a question for each major “but” that’s likely to arise.
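To make the mechanics concrete, here’s a minimal sketch of how an Apgar-style session score could work. The questions are purely hypothetical placeholders (not the actual list, which doesn’t exist yet); the point is just the 0/1/2 ratings summing into an aggregate:

```python
# Hypothetical questions standing in for the eventual list.
QUESTIONS = [
    "Did the players drive the action?",
    "Did every player get spotlight time?",
    "Did the session end at a satisfying point?",
]

def score_session(ratings):
    """Sum a list of 0/1/2 ratings into one aggregate score.

    0 = notably bad, 1 = middling, 2 = notably good.
    """
    for r in ratings:
        if r not in (0, 1, 2):
            raise ValueError("each rating must be 0, 1, or 2")
    return sum(ratings)

# One notably good element, one middling, one notably bad:
print(score_session([2, 1, 0]))  # 3 out of a possible 6
```

Note how the per-question breakdown preserves exactly the information a single “Did you have fun?” rating would mush together – the dragging market scene gets its own 0 instead of dragging a great fight down to a 1.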
Lastly, I am not looking to create any new definitions or models of play. Importantly, even if we end up with a good working list, it will not be definitive. I’m trying to report on actual play, and to create categories that simplify that reporting – not to define play or lay down rules about the 5 things that “make” a GM or whatever. I just want to be able to talk about tools.
Anyway, thank you again for all the feedback. I especially want to call out some of the cool links in the comments to others who have had similar thoughts, including Tim White and some folks at Story Games.