
Mash temp and attenuation


59 replies to this topic

#41 positiveContact


Posted 22 August 2016 - 07:37 AM

I think I would have spread the mash temps out more.  like 149 and 156.



#42 HVB


Posted 22 August 2016 - 07:42 AM

I think I would have spread the mash temps out more.  like 149 and 156.

I agree and I would have rather seen even 159 so you get a full 10 degree swing.



#43 positiveContact


Posted 22 August 2016 - 07:56 AM

I agree and I would have rather seen even 159 so you get a full 10 degree swing.

 

I wasn't sure of the practical limit for a "normal" mash.



#44 dmtaylor


Posted 22 August 2016 - 07:37 PM

You guys are looking at Brulosophy (and probably Experimental Brewing) wrong.  None of us are saying that our results are anything more than a data point to use as you please.  None of us intend for them to be taken as absolute scientific results.  Dave, we've talked to statisticians about this and a p value of .15 would be pretty worthless according to them.  They also point out that a p value does not represent an absolute conclusion, but rather that something is worthy of more study.  Check this out... https://www.stats.or...c-significance/

I'm not a statistician but I almost played one on TV.  You might want to take a look at this:

 

https://blog.minitab...s-in-statistics

 

I like this part:

 

"Keep in mind that there is no magic significance level that distinguishes between the studies that have a true effect and those that don’t with 100% accuracy. The common alpha values of 0.05 and 0.01 are simply based on tradition. For a significance level of 0.05, expect to obtain sample means in the critical region 5% of the time when the null hypothesis is true. In these cases, you won’t know that the null hypothesis is true but you’ll reject it because the sample mean falls in the critical region. That’s why the significance level is also referred to as an error rate!

This type of error doesn’t imply that the experimenter did anything wrong..."

 

My understanding of selection of p values is that this selection should be based on your confidence level in how often you expect your experiments to be reasonably accurate.  Do you expect 95% of your experiments to produce significant results?  Do you expect only 5% of the experiments to fail in some manner?  I think not.  I think it might be closer to 85%, or maybe just 80%, possibly even lower.  To be generous, I would select 85% confidence which equates to a p value of 0.15.

 

I might not be a statistician, but I have put some thought into this.  And I did get A's in calculus and statistics and every other course back in college.  Of course, that was many many many brain cells ago........
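The "error rate" idea in that Minitab quote is easy to demonstrate with a quick simulation. This is just an illustrative sketch, not anyone's published method; the panel size of 20 and the cutoff of 11 correct are example values for a triangle test at the p≤0.05 level. Brew two identical beers, let every taster guess, and count how often the panel still clears the significance bar by pure luck:

```python
import random

def false_positive_rate(n_tasters=20, cutoff=11, trials=20000, seed=1):
    """Triangle tests on IDENTICAL beers: every taster has a 1-in-3
    chance of picking the 'odd' sample.  Count how often the panel
    still reaches the significance cutoff purely by chance."""
    random.seed(seed)
    hits = 0
    for _ in range(trials):
        # Each taster guesses; True (a correct pick) counts as 1.
        correct = sum(random.random() < 1 / 3 for _ in range(n_tasters))
        if correct >= cutoff:  # 11-of-20 is the usual p <= 0.05 threshold
            hits += 1
    return hits / trials
```

With a cutoff chosen for p≤0.05, the simulated rate lands a bit under 5% (the binomial is discrete, so the achievable error rate at 11-of-20 is roughly 3.8%), which is exactly the "reject a true null" rate the quote is describing.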



#45 Brauer


Posted 23 August 2016 - 05:56 PM

Considering p values as percent can be misleading, because it often leads to confusion that it is a simple representation of how often the result is correct or predictable. Instead, I find it helpful to think of the p value as an allowance for the expected deviation from a perfect result due to influences that give false positive or negative results. I've always found a specific, simple way to think of the value of p elusive, though.

33.3% of the choices of a given beer are random, for an infinite number of tests of identical samples. However, as the sample size gets smaller, we expect the deviation from a perfect result to get larger. We want to avoid this leading us to falsely believe that this error is meaningful. For example, I wouldn't be surprised if a coin flipped only 4 times gave me 3 heads and 1 tail. If I didn't allow for the chance of non-perfect distribution in my small sample, I might conclude that a coin has 4 sides: 3 with heads and 1 with tails.

Perhaps a more revealing way to look at the meaning of p≤0.05 specifically for a triangle test, in terms of percent, is to consider that a completely random result would mean 33.3% tasted the difference, but at p≤0.05 we are requiring ~50% of the testers to be able to taste the difference to conclude that they are actually different.

That means that, at p≤0.05, we are only requiring ~25% of tasters to be able to distinguish the beers in order for them to test as significantly different, considering the number of random votes. In other words, at that level of significance and with this number of tasters, we are allowing 75% of the choices to be completely random yet we would still consider the beers different.

P≤0.05 is not a magic number, and it can be a bit of a scientific crutch, but it has come to be a widely accepted minimum criterion and it is a looser level of stringency than the 95% would suggest.
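That arithmetic (a 1/3 random baseline, with roughly half the panel needing to answer correctly at p≤0.05) can be checked exactly with the binomial distribution. A stdlib-only sketch, where `p_value` and `min_correct` are illustrative helper names:

```python
from math import comb

def p_value(correct, n):
    """Chance of at least `correct` right answers out of `n` tasters
    if everyone is guessing (1/3 per taster, one-sided tail)."""
    return sum(comb(n, k) * (1 / 3) ** k * (2 / 3) ** (n - k)
               for k in range(correct, n + 1))

def min_correct(n, alpha=0.05):
    """Smallest number of correct picks that reaches significance."""
    for k in range(n + 1):
        if p_value(k, n) <= alpha:
            return k
```

For a 20-person panel this gives `min_correct(20) == 11`, i.e. 55% of tasters, even though a third of the panel lands there by chance alone. Subtracting out the lucky guessers, (11/20 − 1/3)/(2/3) ≈ 33% of the panel actually has to perceive the difference at that threshold, consistent with the ~25% figure quoted above for an exactly 50%-correct result.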

#46 Big Nake


Posted 23 August 2016 - 06:01 PM

I like beer. :lol:

#47 Brauer


Posted 23 August 2016 - 06:11 PM

I like beer. :lol:

But we are talking about beer!

The 4th paragraph is probably the best one to read if you find the rest TLDR.

#48 Big Nake


Posted 23 August 2016 - 06:35 PM

But we are talking about beer!

The 4th paragraph is probably the best one to read if you find the rest TLDR.

Only kidding. This is my standard answer when the water gets too deep. I like hearing about all of the scientific things that go on in brewing up until the point that I realize that I have no idea what's being talked about and then I check out. I leave the technical stuff to you guys. :D

#49 dmtaylor


Posted 23 August 2016 - 09:04 PM

Hmm.  I've thought about this some more.  I'll stand by my previous argument, but I think there's one thing we can all agree on: We really need to be testing with the maximum population possible.  If most tests have about 10 or 12 tasters, try to get 20 or 25 for the future, or even 30 or 40 or 50 if possible.  The more tasters, the better and more reliable the results, and the smaller the percentage of folks that would need to identify the odd beer correctly for significance.  The world is full of beer geeks who think they know how to tell the difference between Pliny and cat piss.  So have as many folks as possible volunteer and prove it.  More people may equate with more pain, but hey, we have Cons and Conns to help us with that.  I also have 4 cats who would love to volunteer.  ;)



#50 Brauer


Posted 24 August 2016 - 04:21 AM

Hmm. I've thought about this some more. I'll stand by my previous argument, but I think there's one thing we can all agree on: We really need to be testing with the maximum population possible. If most tests have about 10 or 12 tasters, try to get 20 or 25 for the future, or even 30 or 40 or 50 if possible. The more tasters, the better and more reliable the results, and the smaller the percentage of folks that would need to identify the odd beer correctly for significance. The world is full of beer geeks who think they know how to tell the difference between Pliny and cat piss. So have as many folks as possible volunteer and prove it. More people may equate with more pain, but hey, we have Cons and Conns to help us with that. I also have 4 cats who would love to volunteer. ;)

One useful exercise, once you collect the first set of data, is to see how many data points would be required for that ratio of answers to be sufficient to test as significantly different. This is how the sample size is determined for a lot of important scientific studies. Of course, it is quite possible that the data will move more toward random as more data is collected.

I did a casual analysis of some of their data and it looked like maybe 60 tasters might be an appropriate number. When Brülosophy did collect 39 votes for one of their fermentation experiments the results were virtually random, but I was surprised they could only get 39 tasters at NHC.
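That back-of-the-envelope exercise can be sketched in a few lines: hold the observed fraction of correct picks fixed and grow the panel until a one-sided binomial test against 1/3 guessing comes out significant. `guess_tail` and `tasters_needed` are illustrative names, not anything Brülosophy publishes:

```python
from math import ceil, comb

def guess_tail(correct, n):
    """One-sided binomial tail under pure guessing (p = 1/3)."""
    return sum(comb(n, k) * (1 / 3) ** k * (2 / 3) ** (n - k)
               for k in range(correct, n + 1))

def tasters_needed(observed_fraction, alpha=0.05, n_max=500):
    """Smallest panel size at which the observed proportion of correct
    picks, if it held up, would test as significantly non-random."""
    for n in range(3, n_max + 1):
        if guess_tail(ceil(observed_fraction * n), n) <= alpha:
            return n
    return None  # the ratio never separates from chance within n_max
```

A panel running at 55% correct reaches significance with only about a dozen tasters, one hovering in the low 40s needs several dozen, and anything sitting at the 1/3 baseline never gets there at all, which is why a near-random result from 39 votes is unlikely to firm up just by collecting more votes.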

#51 denny


Posted 24 August 2016 - 12:51 PM

Hmm.  I've thought about this some more.  I'll stand by my previous argument, but I think there's one thing we can all agree on: We really need to be testing with the maximum population possible.  If most tests have about 10 or 12 tasters, try to get 20 or 25 for the future, or even 30 or 40 or 50 if possible.  The more tasters, the better and more reliable the results, and the smaller the percentage of folks that would need to identify the odd beer correctly for significance.  The world is full of beer geeks who think they know how to tell the difference between Pliny and cat piss.  So have as many folks as possible volunteer and prove it.  More people may equate with more pain, but hey, we have Cons and Conns to help us with that.  I also have 4 cats who would love to volunteer.   ;)

 

This is the main difference between Experimental Brewing and Brulosophy.  We try to get multiple brewers and they each hold a tasting, resulting in a larger pool of data.  It also helps to account for possible issues in the brewing.  This is not in any way meant to diss Brulosophy... we just do things differently.


One useful exercise, once you collect the first set of data, is to see how many data points would be required for that ratio of answers to be sufficient to test as significantly different. This is how the sample size is determined for a lot of important scientific studies. Of course, it is quite possible that the data will move more toward random as more data is collected.

I did a casual analysis of some of their data and it looked like maybe 60 tasters might be an appropriate number. When Brülosophy did collect 39 votes for one of their fermentation experiments the results were virtually random, but I was surprised they could only get 39 tasters at NHC.

 

Hey, YOU try it....there's a lot of other stuff going on!  ;)



#52 Brauer


Posted 25 August 2016 - 03:20 AM

Hey, YOU try it....there's a lot of other stuff going on! ;)

Hah! No thanks! I certainly understand the problem, but when I saw that they were running these at NHC, I expected to see something like 100 tasters and was just surprised that they only managed 39. It does show how difficult it is to get large sample sizes. It turns out that it would have been a waste of time trying to gather all those votes, since their Lager yeast fermentation experiment results were so clear.

Edited by Brauer, 25 August 2016 - 03:22 AM.


#53 Brauer


Posted 25 August 2016 - 03:42 AM

Only kidding. This is my standard answer when the water gets too deep. I like hearing about all of the scientific things that go on in brewing up until the point that I realize that I have no idea what's being talked about and then I check out. I leave the technical stuff to you guys. :D

I know you're joking, I was too! One of the cool things about beer is the multiple levels from which it can be appreciated. Everything from "what a pretty color" to "the light absorbance at 430 nm is..."

#54 Genesee Ted


Posted 25 August 2016 - 07:02 AM

I think the best place to source tasters would be to ask the BJCP. Our chapter would be all over this.

#55 denny


Posted 25 August 2016 - 08:47 AM

I think the best place to source tasters would be to ask the BJCP. Our chapter would be all over this.

 

But how many would you have?  And simply being BJCP is no guarantee of accuracy.



#56 positiveContact


Posted 25 August 2016 - 09:11 AM

But how many would you have?  And simply being BJCP is no guarantee of accuracy.

 

I think it is more likely to guarantee people that will take it seriously.  you guys would know better than me though.



#57 neddles


Posted 25 August 2016 - 09:36 AM

I think the best place to source tasters would be to ask the BJCP. Our chapter would be all over this.

 

But how many would you have?  And simply being BJCP is no guarantee of accuracy.

I took his point to be that you would have people who were motivated to do tastings and who would benefit by practice… and not that there would be a sense of increased accuracy.



#58 Genesee Ted


Posted 26 August 2016 - 10:52 AM

I took his point to be that you would have people who were motivated to do tastings and who would benefit by practice… and not that there would be a sense of increased accuracy.

Exactly. That being said, the more data gathered, the more accurate the trend you see when you eliminate the outliers.

#59 Brauer


Posted 26 August 2016 - 07:50 PM

I took his point to be that you would have people who were motivated to do tastings and who would benefit by practice… and not that there would be a sense of increased accuracy.

Besides, from Brülosophy's analyses, it doesn't seem that judges are any better than beer aficionados at detecting which beer is different in a triangle test.

#60 denny


Posted 27 August 2016 - 09:04 AM

Besides, from Brülosophy's analyses, it doesn't seem that judges are any better than beer aficionados at detecting which beer is different in a triangle test.

 

Based on the many triangle tests I've done, I heartily agree.  Plus, if the difference is so small that you have to have a trained taster to pick it out, does it matter?



