    Posted on 19-07-02, 16:22
    Post: #53 of 202
    Since: 11-01-18

    Last post: 660 days
    Last view: 16 days
you are suggesting that applying the model to the dataset that helped produce the model will give you a different result than in the original paper?
    Posted on 19-07-02, 18:04
    Stirrer of Shit
    Post: #458 of 717
    Since: 01-26-19

    Last post: 1763 days
    Last view: 1761 days
    For a liberal definition of data set, yes.

    The model is applied to the top 100 players, and purportedly explains why there is a score gap between the male and female players without having to reach the conclusion that it's because female players are worse.

But if you'd apply it to the whole data set, then the explanation provided ("there are more male players than female, ergo there must be more good male than good female players") would fall apart, since the average male player is also far better than the average female player. If the model still predicts a score gap, it doesn't model what it claims to model, since that would indeed imply female players are worse; and if it doesn't, it's wrong, since there is such a gap in the data.

    Both of these outcomes would falsify the study, no?

    There was a certain photograph about which you had a hallucination. You believed that you had actually held it in your hands. It was a photograph something like this.
    Posted on 19-07-03, 00:04
    Post: #54 of 202
    Since: 11-01-18

    Last post: 660 days
    Last view: 16 days
Feeding a larger data set into the model, let's say using numpy or R, would help in assessing the viability of the model, but you'd have to explain where exactly the model fails in more detail, beyond working backwards from "men are better at chess".

    maybe wertigon should also crunch these numbers since this isn't a high school debate.
    Posted on 19-07-03, 00:42
    Stirrer of Shit
    Post: #459 of 717
    Since: 01-26-19

    Last post: 1763 days
    Last view: 1761 days
I wouldn't think there's any need to apply the model in this case. If you're okay with assuming it's not completely broken, it stands to reason that the player ranked n/2 out of n would have the median score, no matter the value of n.

And if it does predict that, then it doesn't match up with the data, since the average man is better than the average woman, with a gap of 300-odd points (see graph above), for both mean and median.

The data set is publicly available; the problem is the model, which requires you to compute values far too large for LibreOffice or Octave to handle.

Python overflows too, which I think rules out NumPy unless it handles big numbers better:
    >>> ((Mean + Magic1*Sigma) + Magic2*Sigma*(math.factorial(Count_M)/math.pow(math.factorial((Count_M-Rank)),Rank))*(math.log(Count_M)-(round(Gamma+math.log(Rank-1)))))-((Mean + Magic1*Sigma) + Magic2*Sigma*(math.factorial(Count_F)/math.pow(math.factorial((Count_F-Rank)),Rank))*(math.log(Count_F)-(round(Gamma+math.log(Rank-1)))))
    Traceback (most recent call last):
    File "<stdin>", line 1, in <module>
    OverflowError: long int too large to convert to float


    And with the optimized version:
    >>> ((Mean + Magic1*Sigma) + Magic2*Sigma*math.exp(0.5*(math.log(2)+math.log(math.pi)+math.log(Count_M))+Count_M*math.log(Count_M)-1 - 0.5*(math.log(2)+math.log(math.pi)+math.log((Count_M-Rank))+(Count_M-Rank)*math.log((Count_M-Rank))-1 + Rank*math.log(Count_M)))*(math.log(Count_M)-(round(Gamma+math.log(Rank-1)))))-((Mean + Magic1*Sigma) + Magic2*Sigma*math.exp(0.5*(math.log(2)+math.log(math.pi)+math.log(Count_F))+Count_F*math.log(Count_F)-1 - 0.5*(math.log(2)+math.log(math.pi)+math.log((Count_F-Rank))+(Count_F-Rank)*math.log((Count_F-Rank))-1 + Rank*math.log(Count_F)))*(math.log(Count_F)-(round(Gamma+math.log(Rank-1)))))
    Traceback (most recent call last):
    File "<stdin>", line 1, in <module>
    OverflowError: math range error
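
Incidentally, the ratio n!/((n-k)! * n^k) is at most 1, so if the whole exponent stays in log space until one final exp(), it shouldn't overflow at all. A minimal sketch using math.lgamma (ln(x!) = lgamma(x+1)); the constants here are hypothetical stand-ins for whatever Mean, Sigma, Magic1, Magic2 and Gamma were bound to above:

import math

GAMMA = 0.5772156649015329  # Euler-Mascheroni constant (the Gamma above)

def tail_ratio(n, k):
    # n!/((n-k)! * n**k) as exp of a log difference; always <= 1, so no overflow
    return math.exp(math.lgamma(n + 1) - math.lgamma(n - k + 1) - k * math.log(n))

def expected_score(n, k, mean=1500, sigma=350, c1=1.25, c2=0.287):
    # E[n,k] ~ (mu + c1*sigma) + c2*sigma * ratio * (ln(n) - H(k-1)),
    # with H(x) approximated by GAMMA + ln(x) as in the snippets above
    h = GAMMA + math.log(k - 1) if k > 1 else 0.0
    return (mean + c1 * sigma) + c2 * sigma * tail_ratio(n, k) * (math.log(n) - h)

print(expected_score(60000, 7) - expected_score(6000, 7))  # gap at rank 7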


    Posted on 19-07-03, 02:08 (revision 2)
    Post: #55 of 202
    Since: 11-01-18

    Last post: 660 days
    Last view: 16 days
    you don't need to put the entire formula in a single line.

math.factorial(Count_M)/math.pow(math.factorial((Count_M-Rank)),Rank)


that seems off. is it not n!/((n-k)! * n^k)?

that's also probably where the overflows are coming from. you can reduce the size of the number it produces with a bit of work, but it's not easily expressed in a single line.
    Posted on 19-07-03, 11:45
    Stirrer of Shit
    Post: #460 of 717
    Since: 01-26-19

    Last post: 1763 days
    Last view: 1761 days
    Posted by funkyass
    you don't need to put the entire formula in a single line.

math.factorial(Count_M)/math.pow(math.factorial((Count_M-Rank)),Rank)


that seems off. is it not n!/((n-k)! * n^k)?

that's also probably where the overflows are coming from. you can reduce the size of the number it produces with a bit of work, but it's not easily expressed in a single line.

It is valid, just a bit clumsily expressed with double parens.
And yes, you can approximate ln(n!), but eventually you still have to calculate exp(ln(n!) - ln((n-k)!) - k*ln(n)), and my attempt at that threw the range error above.

I don't know what the model's good for, because the expected value of a player with rank k/n can be calculated by just plugging the quantile into the probit function; no complicated math needed.
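
A minimal sketch of that, assuming normally distributed ratings and the placeholder mean/sigma used elsewhere in the thread (a sketch, not the study's method):

from statistics import NormalDist  # Python 3.8+

def expected_score_probit(n, k, mean=1500, sigma=350):
    # rank k of n (k=1 is best) sits near the 1 - (k - 0.5)/n quantile
    return NormalDist(mean, sigma).inv_cdf(1 - (k - 0.5) / n)

print(expected_score_probit(60000, 7) - expected_score_probit(6000, 7))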

    Posted on 19-07-03, 20:31
    Post: #78 of 205
    Since: 11-24-18

    Last post: 156 days
    Last view: 27 days
    Seems like you still do not understand how the math works. It is pretty simple, really:

    https://en.wikipedia.org/wiki/Order_statistic#Probability_distributions_of_order_statistics

This is, in other words, a standard function. However, it cannot be used directly, due to the extreme calculations involved; it must be sped up. That's why the final approximate form is:

[math]E_{n,k} \approx (\mu + c_1 \sigma) + c_2 \sigma \frac{n!}{(n-k)!\,n^k}(\ln{n} - H(k-1))[/math]

    (If that isn't showing up properly, find a latex editor)

Sympy could help with the bigger calculations; more specifically, try this:

    https://www.sympygamma.com/input/

    Of course, it is a bit of manual repetitive labor to input 100 different equations, but maybe you could simplify the equation if you plug in some values.
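
For what it's worth, sympy keeps the factorials as exact big integers, so nothing overflows; a sketch with assumed inputs (rank 7 out of 60000, and the constants quoted later in the thread):

import sympy as sp

n, k = 60000, 7
mu, sigma = 1500, 350
c1, c2 = sp.Rational(5, 4), sp.Rational(287, 1000)

H = sum(sp.Rational(1, i) for i in range(1, k))          # exact H(k-1)
ratio = sp.factorial(n) / (sp.factorial(n - k) * n**k)   # exact rational, no overflow
E = (mu + c1 * sigma) + c2 * sigma * ratio * (sp.log(n) - H)

print(E.evalf())  # numeric expected score for rank k of n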
    Posted on 19-07-03, 21:15 (revision 1)
    Stirrer of Shit
    Post: #462 of 717
    Since: 01-26-19

    Last post: 1763 days
    Last view: 1761 days
    Posted by wertigon
    Seems like you still do not understand how the math works. It is pretty simple, really:

    https://en.wikipedia.org/wiki/Order_statistic#Probability_distributions_of_order_statistics

    This is, in other words, a standard function.

Okay, so they assume they're dealing with a uniform distribution (i.e. of probabilities) and then use this math to get a value between 0 and 1, which they then plug into the inverse cdf.

But why can't they use the inverse cdf directly? They're calculating an expected value for each rank, not using the kth-order pdf for anything interesting. Shouldn't the expected value for this rank be closely approximated by cdf^-1((rank + 1/2)/total)?

    Intuitively, the top player among ten players is better than 90-100% of them, the second best is better than 80-90%, ..., the tenth best is better than 0-10%. And you'd need these statistics to figure out which of these values are more likely, but for large n (say, n > 100) the difference is very small. Considering Elo scores only have four significant digits, possibly less, this approximation shouldn't affect the accuracy much, and makes it far easier to reason about.

    It holds up in testing, so why isn't it correct?
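
One quick way to check that: for uniform order statistics the mean quantile of the k-th best of n is exactly (n - k + 1)/(n + 1), so you can compare pushing that through the inverse cdf against the midpoint rule above. (Plugging a mean quantile into the inverse cdf is itself an approximation, but it shows how little the two choices differ for large n.) A sketch with the placeholder mean/sigma:

from statistics import NormalDist

z = NormalDist(1500, 350)
n = 1000
for k in (1, 10, 100, 500):                # rank k of n, counted from the top
    mean_quantile = (n - k + 1) / (n + 1)  # exact for uniform order statistics
    midpoint = 1 - (k - 0.5) / n           # the midpoint approximation above
    print(k, round(z.inv_cdf(mean_quantile)), round(z.inv_cdf(midpoint)))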

Sympy could help with the bigger calculations; more specifically, try this:

    https://www.sympygamma.com/input/

    Of course, it is a bit of manual repetitive labor to input 100 different equations, but maybe you could simplify the equation if you plug in some values.

I tried it with rank 3000, and it claimed the answer was 3.39959172783327 * 10^143981. How exactly am I supposed to input it?
    Tried this:
    ((1500 + 1.25*350) + 0.3*350*exp(0.5*(log(2)+log(pi)+log(60000))+60000*log(60000)-1 - 0.5*(log(2)+log(pi)+log((60000-3000))+(60000-3000)*log((60000-3000))-1 + 3000*log(60000)))*(log(60000)-(ceiling(log(3000-1)))))-((1500 + 1.25*350) + 0.3*350*exp(0.5*(log(2)+log(pi)+log(6000))+6000*log(6000)-1 - 0.5*(log(2)+log(pi)+log((6000-3000))+(6000-3000)*log((6000-3000))-1 + 3000*log(6000)))*(log(6000)-(ceiling(log(3000-1)))))

    EDIT: oops, forgot a / in a closing tag

    Posted on 19-07-03, 21:23 (revision 1)
    Post: #79 of 205
    Since: 11-24-18

    Last post: 156 days
    Last view: 27 days
My guess is they do not wish to use that, in order to keep the prediction independent of the dataset.

    When creating a model you wish to make it as generic as possible. Ever heard of the term overfitting before? Quite common in the realm of A.I.

As for inputting, well, something is very wrong in your calculations; you should get a value between roughly 0 and 500.
    Posted on 19-07-03, 21:46 (revision 1)
    Stirrer of Shit
    Post: #463 of 717
    Since: 01-26-19

    Last post: 1763 days
    Last view: 1761 days
    This would work for any data set though, wouldn't it? I don't see how it's overfitting. They assume it's a normal distribution already, so this would be no change from how things were before.

    EDIT: And of course, the two models should still give the same predictions around the median regardless of any inaccuracy, which is my main gripe with it.

    Posted on 19-07-04, 20:50
    Post: #80 of 205
    Since: 11-24-18

    Last post: 156 days
    Last view: 27 days
    Not sure how you get your formula for the model so messed up. To be clear, the formula should be:

µ + σ(c1 + (c2 · n! · (ln n - H(k-1))) / ((n-k)! · n^k))

    Where H(k-1) = 1 + 1/2 + 1/3 + ... + 1/(k-1).

Incidentally, H(n) can be approximated as follows:

H(n) ≈ ln n + 33841/58628 + 1/(2n) - 1/(12n^2)

With this it should be possible to reach good enough accuracy with a symbolic calculator (i.e. one that understands the notion of 100! / 101!), at least for the purpose of determining the difference in Elo.
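
That approximation is easy to sanity-check against the exact sum; a sketch (33841/58628 ≈ 0.57722 is just a rational stand-in for the Euler-Mascheroni constant):

import math

def H_exact(n):
    return sum(1 / i for i in range(1, n + 1))

def H_approx(n):
    # ln n + gamma + 1/(2n) - 1/(12n^2), with gamma ~ 33841/58628
    return math.log(n) + 33841 / 58628 + 1 / (2 * n) - 1 / (12 * n**2)

for n in (10, 100, 2999):
    print(n, H_exact(n), H_approx(n))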
    Posted on 19-07-04, 21:32 (revision 2)
    Stirrer of Shit
    Post: #465 of 717
    Since: 01-26-19

    Last post: 1763 days
    Last view: 1761 days
    Posted by wertigon
    Not sure how you get your formula for the model so messed up. To be clear, the formula should be:

µ + σ(c1 + (c2 · n! · (ln n - H(k-1))) / ((n-k)! · n^k))

    Where H(k-1) = 1 + 1/2 + 1/3 + ... + 1/(k-1).

    Because n! gets changed to exp(ln(n!)), where ln(n!) is a far longer expression, and H(x) gets changed to round(γ + ln(x)).

Incidentally, H(n) can be approximated as follows:

H(n) ≈ ln n + 33841/58628 + 1/(2n) - 1/(12n^2)

    Yeah, that'd work too.

With this it should be possible to reach good enough accuracy with a symbolic calculator (i.e. one that understands the notion of 100! / 101!), at least for the purpose of determining the difference in Elo.

OK, so that worked. Because I'm lazy, I hardcoded the constants.
You've got about 60k men and 6k women. The average man should be ranked 30k, the average woman 3k.

    Average man:
1500 + 350 (1.25 + (0.287 60000! (ln 60000 - (ln(30000-1) + 33841/58628 + 1/(2(30000-1)) - 1/(12(30000-1)^2)))) / ((60000-30000)!(60000^30000)))

    = 1937.5
    Average woman:
1500 + 350 (1.25 + (0.287 6000! (ln 6000 - (ln(3000-1) + 33841/58628 + 1/(2(3000-1)) - 1/(12(3000-1)^2)))) / ((6000-3000)!(6000^3000)))

    = 1937.5

    So, the same averages. This is not what the dataset indicates, as per the graph. The average woman is far worse than the average man, by about one standard deviation.

    Do you see my gripe with the model and broader study now?

    EDIT: Oops, wrong formula

    EDIT2: Huh, how can the average be 1937 when I put it down as 1500? See, this is why psychologists shouldn't play around with statistics.
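
(One plausible reading of the 1937.5: at k = n/2 the factorial ratio is astronomically small, so the whole tail term vanishes and the model collapses to µ + c1·σ = 1500 + 1.25·350 = 1937.5, whatever the pool size. Roughly:

[math]\frac{n!}{(n-k)!\,n^k} = \prod_{i=0}^{k-1}\left(1 - \frac{i}{n}\right) \le e^{-k(k-1)/(2n)} \approx e^{-n/8} \quad \text{at } k = n/2[/math]

which underflows to zero for any realistic n, so the model predicts identical medians for both pools by construction.)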

    Posted on 19-07-05, 03:45
    Post: #56 of 202
    Since: 11-01-18

    Last post: 660 days
    Last view: 16 days
    have you looked into how the chess scoring system works?
    Posted on 19-07-05, 07:46
    Post: #81 of 205
    Since: 11-24-18

    Last post: 156 days
    Last view: 27 days
Not quite an expert at this particular math and model, but if your calculations come out wrong, it is probably because you made a mistake along the way.

When I cannot solve a problem like this, I usually turn to a university and send an email to one of my old professors. For them, it doesn't take long to solve these particular models. Perhaps you should ask them?

    Posted on 19-07-05, 08:49
    Stirrer of Shit
    Post: #467 of 717
    Since: 01-26-19

    Last post: 1763 days
    Last view: 1761 days
    Posted by funkyass
    have you looked into how the chess scoring system works?

Yeah, sure. Players gain or lose rating based on how unlikely the win was. So if someone with a low score plays someone with a high score and the wins are 50/50, their ratings converge rapidly; whereas if the player with the high score wins, there's pretty much no change.
In practice, you can just assume that ratings have already converged for all players who aren't complete rookies.
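
That convergence falls out of the standard Elo update rule; a minimal sketch (the K-factor of 32 is an assumption - federations vary):

def elo_update(rating, opponent, score, k_factor=32):
    # expected score implied by the rating gap; score is 1.0 win, 0.5 draw, 0.0 loss
    expected = 1 / (1 + 10 ** ((opponent - rating) / 400))
    return rating + k_factor * (score - expected)

print(elo_update(1500, 1800, 1.0))  # underdog wins: large gain
print(elo_update(1800, 1500, 1.0))  # favorite wins: barely moves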
    Posted by wertigon
Not quite an expert at this particular math and model, but if your calculations come out wrong, it is probably because you made a mistake along the way.

When I cannot solve a problem like this, I usually turn to a university and send an email to one of my old professors. For them, it doesn't take long to solve these particular models. Perhaps you should ask them?

    I wouldn't think there's anything wrong with the calculations. The replication study also said it gave very high results, like estimating the top German player above the current world champion. Plugging in k = 7, it says that the gap according to the model would be 228-298 points, depending on your values for n. This is about what the study finds too.

If you're saying the model is horribly broken and does not accurately reflect reality, then that makes sense. I've yet to understand what was wrong with the far simpler model that can be calculated on a decent pocket calculator.

    Posted on 19-07-05, 20:19
    Post: #82 of 205
    Since: 11-24-18

    Last post: 156 days
    Last view: 27 days
    Posted by sureanem
    I wouldn't think there's anything wrong with the calculations. The replication study also said it gave very high results, like estimating the top German player above the current world champion. Plugging in k = 7, it says that the gap according to the model would be 228-298 points, depending on your values for n. This is about what the study finds too.

If you're saying the model is horribly broken and does not accurately reflect reality, then that makes sense. I've yet to understand what was wrong with the far simpler model that can be calculated on a decent pocket calculator.


I think that your main mistake is that you simply do not rebase sigma, µ, c1 and/or c2 between the two pools - but I'm not an expert, as I said.

    As you note, it doesn't make sense that two different pools with two different skills have the same predicted mean. That is way off, like waaaay off.
    Posted on 19-07-05, 22:39
    Stirrer of Shit
    Post: #468 of 717
    Since: 01-26-19

    Last post: 1763 days
    Last view: 1761 days
The study doesn't do that either, though. c1 and c2 are poorly explained magic constants, and µ and σ are stated as taken from the whole population. If I changed the model to have different means for men and women, it would just be a convoluted way to restate my original statement (women are worse at chess than men) - it would be patently absurd to claim that the difference in skills is accounted for by... a difference in skills, and that this proves men are not more skilled than women.

    Just to be clear: the claim the study makes is that differences in means for the two populations do not explain the skill gap, but rather that the skill gap is solely (or to 96%, anyway) explained by the fact that you've got more men playing chess than you've got women, which means you'd get more extreme scorers and so on and so forth, but that this only holds on the tails.

    Posted on 19-07-08, 11:24
    Post: #83 of 205
    Since: 11-24-18

    Last post: 156 days
    Last view: 27 days
    You still do not understand the methodology. The model *should* come up with different means. There is a difference in skill, because there is a difference in "pool size".

Here, let me simplify the math for you. The advanced math you see is there to deal with the statistics and smooth the curves.

The hypothesis is that the skill difference between the k:th persons of the two pools correlates with their respective pool sizes, yes?

Suppose we have a total population (Z) of 1000 individuals. Within this pool, we have 900 people from one group (X) and 100 people from a different group (Y).

For a simplistic model, if a correlation exists between skill difference and pool size, and it is linear, the model could be something like: D_n = c1 * n + c2

    Where c1 and c2 are calculated from the ratio between X and Y.

If no difference existed, X and Y would have the same mean. But since one does, X and Y should have different means, especially compared to Z. This is why I think you made a mistake in your model calculations.

Either way, a correlation has been proven. Perhaps an ML method could shed further light upon this. But, yes, I think we have reached the end for now.
    Posted on 19-07-08, 12:30
    Stirrer of Shit
    Post: #477 of 717
    Since: 01-26-19

    Last post: 1763 days
    Last view: 1761 days
    Posted by wertigon
The hypothesis is that the skill difference between the k:th persons of the two pools correlates with their respective pool sizes, yes?

    No, only for the upper echelons. The paper doesn't put forth that hypothesis at all. Its claim is that, if you have 1000 men and 100 women, the top 10 men will have a higher average score than the top 10 women, because the top 10 men represent the 99th percentile but the top 10 women only the 90th.

Suppose we have a total population (Z) of 1000 individuals. Within this pool, we have 900 people from one group (X) and 100 people from a different group (Y).

For a simplistic model, if a correlation exists between skill difference and pool size, and it is linear, the model could be something like: D_n = c1 * n + c2

    Where c1 and c2 are calculated from the ratio between X and Y.

    How's that work then? You become less skilled by being in a smaller group?

That makes no sense. If you give a country of 10 million people some test, and each region has 1 million people, then every region should have a lower average score than the country as a whole, by virtue of its smaller population. This is not mathematically possible.

    Another example: in the PISA rankings, Singapore and Finland both score highly despite being rather small countries.

If no difference existed, X and Y would have the same mean. But since one does, X and Y should have different means, especially compared to Z. This is why I think you made a mistake in your model calculations.

    You can't just do that though. The study uses the same mean for them. If they didn't do that then the model wouldn't be consistent with their claims, see above.

Either way, a correlation has been proven. Perhaps an ML method could shed further light upon this. But, yes, I think we have reached the end for now.

    How'd ML help? It'd just be a roundabout way of doing regression.

    Posted on 19-07-08, 16:56
    Stirrer of Shit
    Post: #479 of 717
    Since: 01-26-19

    Last post: 1763 days
    Last view: 1761 days
    Primaries are coming up. Republican ones won't be very interesting to follow, but what about the Democrats'?

    Specifically, will they go for Harris or Sanders?

Obviously, Biden would be electable like nothing else. He'd win the general against Trump with near absolute certainty. But the damage done to the party would be far too great. After having alienated the socially moderate, fiscally liberal wing (Sanders), they'd proceed to alienate the socially liberal, fiscally moderate wing (Clinton), leaving them without any base. This kills the party.

Consider that in ten years or so the Democrats'll have it all locked down. TX was just a few percent from going blue, and as the saying goes, once you go blue, you never go back. They wouldn't want to risk this just to get a president in when they know they can just wait a few years and have the whole thing locked down.

    Also consider that Trump isn't doing much to prevent this or really anything which risks their electoral prospects in the long term (e.g. citizenship question on census, deportations). And with the coming recession, they've got him right where they'd want him to be. With Trump in the White House, it's trivial to blame the recession on his trade war, and then pull a repeat of 2008.

It follows, then, that electability will not really factor into who they pick to run for President, since they'd rather outright throw this one and make their move in 2024 when they've got a perfect storm. This rules out Biden, unless their decision-makers have an extraordinarily high time preference.

The opposite of Biden - quite unelectable, but very good at bringing young people back into the fold - is of course Sanders. With him, they can both have their demographic cake and eat it. Someone prone to wild speculation could even theorize that they cut such a deal with him back in 2016, so that he wouldn't stir up a ruckus conceding, which a lot of people found quite odd.

So I'd guess they'll go with Sanders. The flip side, of course, is that the decision-makers might prefer rushing to force their loyalist through, and I'm not sure the Zeitgeist is such. There was a lot of internal hype over Clinton back in 2016 because she was the obvious successor to Obama, which there doesn't seem to be for Harris. On the other hand, it's possible they'd want to double down and go with the candidate most dissimilar to Trump they can find. Which Sanders obviously isn't - just look at the large number of people who are bullish on Sanders and Trump but bearish on Romney and Clinton.
