Saturday, March 10, 2018

The Scientific Method

Very well, if you've read the last post ... how can we advance our games along the lines of hypothesis, research, testing and proof?

To begin with, we're the scientists; and like those early investigators of the 17th and 18th centuries, our resources are limited and so is our collection of guinea pigs.  You might get yourself into a position where you can study the responses of ten or twenty people, but chances are that right now you only have your party to work with.  Rest assured, this doesn't matter.  You're not curing cancer, you're only improving your game ~ and you're never going to run it for the vast majority, anyway.  What matters here is that you get your methodology under control, and that you stop wasting time.

I have some suggestions; they are not original to me.  They are based on the principles of the scientific method, which has stood the test of time for nearly four centuries, so it dwarfs all of us.

1.  Stop Assuming that every new effort, system, plan or design is going to work.  A hypothesis is a proposal, not a solution.  It is a working guess at the questions or problems that have arisen in your particular game.  Once you get it into your head that "new" does not automatically equate to "better," you have a much better chance of systematically doing the research on what's "old" before kicking it to the curb.

A hypothesis is a response to a pattern that we have noticed during play: the way the players respond, or act, when encountering particular things; the level of resistance or resentment about certain rules; and so on.  The pattern need not be negative: we might want to know why players responded so well to something we did, so that we can reproduce the effect.

As I said, the hypothesis does not look for a solution!  It is, rather, looking for the cause behind something.  We're trying to explain, not eliminate.  Therefore, the proposal we make is not about what might happen if we do something; it is, rather, an explanation.  "I think the players were resistant to running away because ..." ~ and we go from there.

We should not begin from a place like, "If we do this, the players will run away next time."  That is poor methodology on many levels: first, it assumes we already know why the players refused to run away; second, it assumes we already have the solution; and third, we're already biased towards the solution we've created, because we haven't based that solution on any research, but rather on our gut.  Our personal observations, then, are bound to remove any practical benefit we might get from the experiment.

2.  Test the hypothesis.  Fair enough ... but how?  The guideline here is to build an experiment that changes nothing about the previous situation, while enabling us to take notes.  Let's say, for example, that recently the party refused to run away from a fight they were losing, and as a result they nearly died, or a total-party-kill resulted.  And now, to understand better what happened, we want to build an experiment that will test our explanation.

Let's pick a hypothesis.  I'll propose three:

  • The Players don't seem to care very much about their characters; they're ready to roll the die and hope for good results because they're not really losing anything.
  • The Players won't give up a fight from a sense of shame; they'd rather lose straight up than feel like cowards who ran from a fight.
  • The Players can't see it coming.  Everything seems fine, they think they have it under control, but they don't recognize their circumstances until it is too late.

I'm going to suggest the second hypothesis: that the players feel shame.

Now, standard practice for DMs who have just caused a TPK is to rush to a solution: "I'm never going to let that happen again."  This is no way to build an experiment.  During the TPK, as it presented itself, the DM was likely in as much distress as the players ... and was, therefore, not paying attention.

To test our hypothesis, what we want is to engineer another possible TPK, watch what happens and make notes.

[As an aside, I suggested this scheme to my daughter, who immediately went to this place with it:

(image missing ... sorry I haven't got a better copy)]

Now, relax.  I'm not suggesting that parties are guinea pigs and that we should deliberately kill them just so we can watch.  I'm not Dr. Mengele.

I am saying, however, that if we create the potential for another TPK, knowing that the players' behaviour will once again be tested in the same manner, we can prepare ourselves in advance and take notes (mentally) about what happens.  Were this a legitimate experiment, we could have an outside observer, preferably a sociology or psychology student, come along and sit in ... but that is probably out of the question.

Whatever the reader's personal take on this proposal (and obviously, it would not be questionable if I had chosen a less sensitive subject than TPKs), our goal here is to gather data.  What do the players say?  Do they equate the present situation with one that occurred earlier?  Are some people suggesting that maybe everyone should be pulled back, only to be shut down by other, more reckless players?  Do the players seem to draw upon irrational bravado?  Are there signs of comprehending that they're going to lose?  What happens?
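
To make the note-taking concrete: here is a minimal sketch, in Python, of how those observations might be written down in a structured way, so they can be compared across encounters later.  Every field name here is my own invention ~ hypothetical, not a prescription ~ and a paper notebook with the same columns would serve just as well.

    # A hypothetical observation log for one engineered near-TPK encounter.
    # Field names are invented for illustration; use whatever categories
    # match the hypotheses you are actually testing.
    from dataclasses import dataclass

    @dataclass
    class EncounterObservation:
        session_date: str                     # when the encounter was run
        players_recalled_earlier_tpk: bool    # did they equate it with the earlier situation?
        retreat_suggested: bool               # did anyone propose pulling back?
        suggestion_shut_down: bool            # was that proposal overruled by others?
        bravado_remarks: int                  # count of irrational-bravado moments
        saw_the_loss_coming: bool             # signs of comprehending they would lose
        outcome: str                          # "retreated", "won", "near-TPK", "TPK"
        notes: str = ""

    log = [
        EncounterObservation(
            session_date="2018-03-10",
            players_recalled_earlier_tpk=True,
            retreat_suggested=True,
            suggestion_shut_down=True,
            bravado_remarks=3,
            saw_the_loss_coming=False,
            outcome="near-TPK",
            notes="Retreat proposed once; dismissed as cowardice.",
        ),
    ]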

We can't draw a general theory about the potential and implementation of situations resulting in total-party-kills without examining the data, refining our hypothesis, observing relevant, isolated situations (single player deaths), rejecting bad guesses that aren't supported by the data and, on the whole, finding out if we know what the hell we're talking about.
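
Carrying on with the hypothetical log sketched above, "rejecting bad guesses" amounts to counting, across every recorded encounter, how often the behaviour each hypothesis predicts actually showed up.  The predicates below are my own illustrations of two of the three hypotheses proposed earlier, not established measures:

    # Hypothetical: which of the earlier hypotheses does each
    # logged encounter support?

    def shame_supported(obs: EncounterObservation) -> bool:
        # The shame hypothesis predicts retreat gets proposed
        # and then shut down as cowardice.
        return obs.retreat_suggested and obs.suggestion_shut_down

    def blindness_supported(obs: EncounterObservation) -> bool:
        # The "can't see it coming" hypothesis predicts no retreat
        # proposal and no sense that a loss was coming.
        return not obs.retreat_suggested and not obs.saw_the_loss_coming

    for name, test in [("shame", shame_supported), ("blindness", blindness_supported)]:
        hits = sum(1 for obs in log if test(obs))
        print(f"{name}: supported in {hits} of {len(log)} encounters")

A hypothesis whose count stays near zero after several encounters is a bad guess; one that keeps turning up is worth refining further.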

3.  Stop Guessing.  Virtually all the content surrounding the betterment of campaigns is nothing more than guesswork.  "If we try A, B might result."  "This could improve your running."  "I'm not saying this is right, this is just the way I do it."  And so on.

If you don't know something, stop presenting the proposal as though it is "known."  I could just as easily make a hypothesis that all online DMs who talk about their game worlds or systems are influenced by knowing that they are being observed by their own players.  As a result ~ still hypothesizing ~ they puff up their feathers in order to look more sure of themselves than they really are.

To find out how much they really know, it is necessary to a) test them personally, by asking questions, to see what sort of clear, factual responses you get, as opposed to nonsense hedging and misdirection; and b) test their players, asking what they think of the DM's position and advice.

For myself, my players are right there to be asked.  Some will definitely not agree that I am a good DM; there have been hard feelings all over the online campaigns.  My data says that I am a good DM for some players, but I am not for many, many others.  I don't imagine that anyone can be "good" for all the players ~ realistically, I just have to be good for enough players.  That is a general theory I've developed.

I expect people reading this blog to disagree with me, and often; I am just surprised how often they seem to disagree on matters where no evidence is being presented on their part, but plenty of evidence exists on mine.  I believe, from my observations, based on the grammar they use to express themselves, that "guessing" is more commonly relied upon than knowing.

If a DM has been running games for 30 years, it is probable that they are a good DM for a sufficient number of players ... and it is also probable that their experience at recognizing patterns in their games results in doing the right thing when the moment comes along.  It does not follow, however, that this means they "know" what that right thing is.  More likely, given the advice, given the patterns of speech and given the lack of hard data presented, they are responding instinctively to a problem, not cognitively.

And that's fine.  For most of us, instinct is more than enough to get us through.  It will make a great firefighter, a great cop, a great doctor, a great artist and a great lover.  What it will not make is a great educator.  An educator has to be able to explain how and why something works for someone who doesn't understand it; and that's not possible with only gut instinct to guess from.

So before trying to educate others, start from learning, not guessing.

P.S.,

Obviously, the TPK experiment can't be performed just once.  We'll never duplicate results that way.
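
In terms of the sketch above, that just means the log needs several entries before any tally means anything ~ one row is an anecdote, not data.  A hypothetical rule of thumb:

    # Hypothetical threshold: don't trust a conclusion drawn from
    # fewer than this many engineered encounters.
    MIN_TRIALS = 5

    if len(log) < MIN_TRIALS:
        print(f"only {len(log)} trial(s) logged; keep observing before concluding")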

2 comments:

  1. I do not know where you are going with this series, nor what your background in research and hypothesis testing is, but I would suggest looking into the Plan-do-check-act (PDCA) method and the Define-Measure-Analyze-Improve-Control (DMAIC) method of process improvement. They are essentially the same thing, but PDCA is for simpler issues than DMAIC. I would also suggest creating only null hypotheses during rules testing and solution implementation, as they are easier to analyze.

    My job is applying these methods/tools to problems, and I have used them effectively to improve my gaming as well, especially when related to rules changes, as they are the easiest form of change to monitor. I've also done a few hypothesis tests on running methodology, but they are harder to monitor.

    Here are the relevant links to Wikipedia for those three topics:

    https://en.wikipedia.org/wiki/PDCA
    https://en.wikipedia.org/wiki/DMAIC
    https://en.wikipedia.org/wiki/Null_hypothesis

  2. Noting that I like this series and would like to see more.
