
King of the Hill computer analysis

Oh, and since it came up, and I like talking about these things, it's actually a huge misconception that modern engines got their evaluations from human masters and the distilled wisdom of human practice.

The strongest engines look at some of the same board features as humans do, but the main reason modern engines are so strong is the massive testing and tuning at super fast time controls.

For a long time humans (incorrectly) assumed that the way to make engines play chess well was to figure out what GMs did, and then translate that for the computer.

Eventually some brave souls thought "Why make engines think like humans? Why not just make a small change and test it by having the engine play tens of thousands of ultra-fast games against itself?", and then things were off to the races.

Even Komodo, which is about as close as you get to a top engine whose evaluations come straight from a human player, modifies and tunes the values based on massive testing.

That's just the way it works. Humans aren't very good at predicting what will work for a computer, so instead we just try some idea, see if it gains or loses a couple of Elo, and repeat.
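
(For the curious, here's a rough sketch of the arithmetic behind "gains or loses a couple of Elo". This isn't any engine's actual test harness; it only assumes the standard logistic Elo model for turning a self-play match score into an Elo estimate.)

```python
import math

def elo_diff(wins, draws, losses):
    """Elo difference implied by a match score, using the standard
    logistic model: expected score = 1 / (1 + 10 ** (-diff / 400))."""
    score = (wins + 0.5 * draws) / (wins + draws + losses)
    if not 0 < score < 1:
        raise ValueError("need a mixed result for a finite Elo estimate")
    return -400 * math.log10(1 / score - 1)

# e.g. 40,000 ultra-fast games: 10,300 wins, 20,000 draws, 9,700 losses
print(round(elo_diff(10300, 20000, 9700), 1))  # roughly +5 Elo for the patch
```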

Anyway, it's not surprising that the same things that work for computers don't work for humans, and vice versa. Machines still don't do all that well at walking, but boy can they roll :)

Stockfish's regular chess evaluation weights, for example, have basically no ties to human play. It's just a highly tuned and tested bit of software, thanks especially to the distributed testing framework.

Off my soap box now :)
Thanks, I'll see if SF does anything similar to your analysis & if so, modify it to do as you describe. (If not, I'll still try to add that analysis; however, SF is very well-optimized and necessarily very complex, so adding even simple features is proving to be quite challenging!)
Excellent points OneOfTheQ. I agree, there is a gross misconception that computers have "mastered" chess the same way that humans have. In particular Kasparov's match loss to Deep Blue has convinced many Americans that chess is no longer worth watching, which is quite sad.

I'd actually be fascinated with a more "human-like" chess engine, although technology hasn't yet advanced to a point where this is practical.
@OneOfTheQ Check how chess software from the 70s and 80s played against grandmasters and you will get an idea. GMs always steered into closed positions where machines can't calculate very deep, and then slowly and gradually crushed them to death.

Modern engines are very different. One tactical mistake on the human's side and the game is all over, as they never let an advantage go. They are soulless, so they don't get tired, and most of all they don't evaluate/analyze/understand anything; they just do math.
@Tuskerking: I'm well aware of how engines played then.

I'm not sure what the point is relative to mine, though :)

@OneOfTheQ

<<it's actually a huge misconception that modern engines got their evaluations from human masters and the distilled wisdom of human practice.>>

Rybka, for example, was designed by an IM, and his own chess understanding played a useful role in the evolution of Rybka's strength.
@tuskerking:

I foresaw such a point, and addressed that in my comment about Komodo (shared human source, Kaufman, in both cases).

Anyway, the fact that Rajlich, an IM, wrote Rybka hardly means that Rybka was designed to mimic human thinking.

In fact, Rajlich was one of the first to embrace the idea of playing thousands and thousands of games at very fast time controls to test changes, so now we're back to thorough testing of small changes.

Even in those engines, while the human provided a lot of ideas, those ideas have thousands of specific implementations in terms of exact weighting of the evaluation terms.

Those weights are NOT decided by the human's intuition, but by massive testing.

Many other engines (like Stockfish) don't have strong humans on their team, and just massively test different ideas and weights.

The testing is the key, not the input from the human player. Basing engine evaluations on the results of large-scale testing and not on what humans think would make sense is part of the reason engines have gotten so much better.
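
(To make that concrete, here's a toy sketch of "change a weight, test it, keep it if it scores better". It is not how Stockfish's fishtest or any real tuning framework actually works; the single hypothetical weight, the quadratic "true strength" model, and all the numbers are made up purely for illustration.)

```python
import random

TRUE_OPTIMUM = 73  # hidden "best" value of a hypothetical eval weight

def true_elo(weight):
    # Ground truth the tuner never sees: strength falls off away from the optimum.
    return -0.05 * (weight - TRUE_OPTIMUM) ** 2

def play_match(new, old, games=2000):
    """Simulate a fast self-play match; return games won by the new version
    (draws are ignored to keep the toy model short)."""
    p = 1 / (1 + 10 ** ((true_elo(old) - true_elo(new)) / 400))
    return sum(random.random() < p for _ in range(games))

random.seed(1)
weight = 120  # deliberately far from the optimum
for _ in range(30):
    candidate = weight + random.choice([-5, 5])  # small change
    if play_match(candidate, weight) > 1000:     # did the patch score over 50%?
        weight = candidate                       # keep it
print(weight)  # drifts toward the neighbourhood of 73 with nobody supplying the answer
```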

It's just a fact that strong human input is not necessary, and sometimes not even helpful. Human brains don't work like computers do, so even if human professionals had perfect insight into the workings of their brain (which they don't), that wouldn't necessarily help engines at all.

It's similar to how in Advanced/Centaur chess the teams that do the best are rarely very strong GMs with an engine. They're usually fairly weak OTB players who know how to use the advantages of computers.

It's been a commonplace in engine programming for quite some time that clean, efficient, bug-free code and massive testing are far more important than mimicking the evaluation of human GMs.

That's just the way it is, but I understand if you still disagree. Not everyone has the time to be fully up to speed on all these things. :)
Oh, for those interested, there's a fairly famous forum post from Tord about what makes evaluation functions for engines good.

http://www.talkchess.com/forum/viewtopic.php?topic_view=threads&p=135133&t=15504

Well worth a read, and the main takeaway relevant to this discussion is the bit about how more knowledgeable evals are not the same as better evals.

Humans bring LOTS of knowledge to bear on chess, which is where we get most of our strength.

Engines' primary advantage is speed of calculation, so just dumping lots of human knowledge, even if it's correct, into an engine won't necessarily make it stronger if it slows down search too much.

I just realized that I didn't really provide any reference for something I claimed was a commonplace, so I thought I'd follow up and at least give one source :)
That is a post from 2006, and a lot has changed since then... I think you are right, but engines are not as good as the top 10 players in the world at longer time controls. They are tested at shorter time controls, which is why they are maybe good at those, but they are certainly overrated. The recent WCC match proved that many of the rook ending lines suggested by SF actually lead to a loss, and Carlsen proved it. So engines still need to learn a lot, not just from testing, but from testing games like 90|30 on distributed networks.
No, not much has changed since then on that front :)

Also, no, top 10 players in the world would get CRUSHED at long time controls in a match against SF on any decent hardware.

Do you also think that when Carlsen said the best computers were vastly superior to him that he was lying for some reason? :)

The claim about misevaluating rook endings is just wrong. That happens sometimes, of course, but less often than humans misevaluate them. Certainly if you're basing this off, say, the incredibly handicapped SF at Chessbomb, that might be true, but that's not indicative of much :)
