Why is the #1 game score ~95% in every model?
Because each model produces scores on its own internal scale (different ranges, different distributions). To make the UI readable, we normalize each model's scores onto a common scale, so the top match always lands near the top of the range. Scores are best read as relative rankings within a model, not compared across models to decide whose raw score is "bigger."
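To make this concrete, here is a minimal sketch of one plausible normalization (min-max rescaling); the actual scheme used by the site may differ, and the example scores are made up.

```python
# Min-max normalization maps each model's raw scores into [0, 1], so the
# top match lands near 100% regardless of the model's native scale.

def normalize(scores):
    """Rescale one model's raw similarity scores to [0, 1]."""
    lo, hi = min(scores), max(scores)
    if hi == lo:                     # degenerate case: all scores equal
        return [1.0 for _ in scores]
    return [(s - lo) / (hi - lo) for s in scores]

# Two hypothetical models scoring the same games on very different scales:
model_a = [0.92, 0.88, 0.70, 0.55, 0.40]     # cosine-like, already in [0, 1]
model_b = [312.0, 280.0, 150.0, 90.0, 12.0]  # unbounded co-play counts

print(normalize(model_a)[0])  # → 1.0 (the #1 game tops out in both models)
print(normalize(model_b)[0])  # → 1.0
```

This is why the #1 result sits near ~95% everywhere: after rescaling, every model's best match occupies the top of the same range.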
Why are two games close on the graph but far apart in the list?
The graph is a 2D map built from tag data only, so distances on it are compressed and approximate. The list is computed with the chosen model in its full similarity space. The map is great for intuition and discovery, but the ranked list is the accurate ordering.
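A small illustration of why the two can disagree, with entirely made-up vectors and coordinates: compressing a high-dimensional space down to 2D inevitably loses information, so two games can land near each other on the map while their full-space similarity stays low.

```python
# Hypothetical example: the 2D coordinates stand in for any projection
# (t-SNE, UMAP, PCA, ...); the full vectors stand in for tag/feature space.
import math

def cosine(u, v):
    """Cosine similarity between two full-space feature vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v)))

# Full-space vectors sharing only one dimension:
game_x = [1.0, 0.0, 1.0, 1.0, 0.0, 0.0]
game_y = [1.0, 0.0, 0.0, 0.0, 1.0, 1.0]

# A projection may place them almost on top of each other anyway:
proj_x = (0.41, 0.55)
proj_y = (0.43, 0.52)

map_dist = math.dist(proj_x, proj_y)  # tiny -> "close" on the graph
full_sim = cosine(game_x, game_y)     # low  -> far apart in the ranked list
print(round(map_dist, 3), round(full_sim, 3))
```

The ranked list uses the full-space number (here, cosine similarity of about 0.33), which is why it can disagree with the eyeballed map distance.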
What's the difference between Community variants (Mode I–VII)?
Each mode is a different recommendation engine trained on Steam community co-play patterns. No tags are used here. The main difference is how similarity is measured, which changes whether results lean toward bigger, widely played games or tighter, more aligned neighbors. For example, starting from Hades, one mode may favor large games that many Hades players also own, while another may favor smaller roguelites that are more directly aligned, even if the shared audience is smaller. Try a couple of modes and stick with the one that matches what you mean by "similar."
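The popularity trade-off between modes can be sketched with two classic similarity measures: raw co-ownership counts versus a size-normalized measure like Jaccard similarity. This is only an illustration of the general idea, not the actual formulas behind Modes I–VII, and every number and game name below (other than Hades) is invented.

```python
# Hypothetical owner counts and overlaps, for illustration only.
owners = {
    "Hades":      1_000_000,
    "BigGame":   20_000_000,   # invented blockbuster
    "NicheRogue":   300_000,   # invented small roguelite
}
co_owned_with_hades = {
    "BigGame":    600_000,     # many Hades players also own it...
    "NicheRogue": 250_000,     # ...but this overlap is proportionally larger
}

def jaccard(game):
    """Overlap divided by combined audience size (intersection / union)."""
    inter = co_owned_with_hades[game]
    union = owners["Hades"] + owners[game] - inter
    return inter / union

by_count   = max(co_owned_with_hades, key=co_owned_with_hades.get)
by_jaccard = max(co_owned_with_hades, key=jaccard)
print(by_count)    # → BigGame: raw counts reward sheer popularity
print(by_jaccard)  # → NicheRogue: normalizing rewards tighter alignment
```

A mode built on something count-like surfaces big crowd favorites; a mode built on something normalized surfaces tighter, more niche neighbors, which is exactly the difference you see when switching modes.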
Why did you make this?
Because people love great games, and most recommenders are either a black box or too generic. I wanted something high-quality, visual, and customizable, built from a stats / machine learning mindset.