Embedding models turn words into vectors. Words used in similar contexts end up close together in that vector space. Dog sits near puppy. Dog sits far from algorithm. Wendel is a daily game built on that geometry: guess the target word by navigating toward it through meaning.
Wendel uses EmbeddingGemma (300M), an open model from Google that outputs a 768-dimensional vector per word. Similarity between two words is the cosine of the angle between their vectors. The model was trained on many languages at once, so English, German, Arabic, and Bangla share one map. Guess Hund when the target is dog and it usually lands in the top 10. No dictionary lookup. The model learned that proximity from text.
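A minimal sketch of that similarity-and-rank math, with toy three-dimensional stand-ins for the real 768-dimensional vectors (the `rank_guess` helper and the numbers are illustrative, not Wendel's actual code):

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine of the angle between two embedding vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def rank_guess(guess: str, target: str, vectors: dict[str, np.ndarray]) -> int:
    """1-based rank of `guess` among all words, by similarity to `target`."""
    scores = {w: cosine_similarity(v, vectors[target]) for w, v in vectors.items()}
    ordered = sorted(scores, key=scores.get, reverse=True)
    return ordered.index(guess) + 1

# Toy 3-dim vectors; the real model emits 768 dimensions.
vectors = {
    "dog":       np.array([0.90, 0.10, 0.00]),
    "Hund":      np.array([0.85, 0.15, 0.05]),
    "puppy":     np.array([0.80, 0.20, 0.10]),
    "algorithm": np.array([0.00, 0.10, 0.90]),
}
print(rank_guess("Hund", "dog", vectors))  # 2 -- only dog itself is closer
```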
Each guess lands on a plot. The vertical axis is similarity to the target. The horizontal axis is a projection of the full 768 dimensions onto a meaning axis, taken after subtracting each language's average position (otherwise the plot just separates English from Arabic from Bangla). The projection is lossy. Ranks are computed against the full vectors, so trust the rank number over where a dot lands.
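A sketch of that centering-and-projection step, under loud assumptions: each word carries a language tag, and the meaning axis here is just the direction between two centered anchor words, which may not be how Wendel picks its axis:

```python
import numpy as np

# Toy vectors where the third dimension encodes language, not meaning.
vectors = {
    "dog":   np.array([0.9, 0.1, 0.3]),
    "cat":   np.array([0.8, 0.2, 0.3]),
    "Hund":  np.array([0.9, 0.1, 0.9]),
    "Katze": np.array([0.8, 0.2, 0.9]),
}
languages = {"dog": "en", "cat": "en", "Hund": "de", "Katze": "de"}

def center_by_language(vectors, languages):
    """Subtract each language's mean vector so the horizontal axis
    spreads words by meaning instead of by language."""
    means = {
        lang: np.mean([vectors[w] for w in vectors if languages[w] == lang], axis=0)
        for lang in set(languages.values())
    }
    return {w: v - means[languages[w]] for w, v in vectors.items()}

def project(vec, axis):
    """Coordinate of `vec` along a unit-normalized axis. Lossy by design:
    one number squeezed out of the full vector."""
    return float(vec @ (axis / np.linalg.norm(axis)))

centered = center_by_language(vectors, languages)
axis = centered["dog"] - centered["cat"]  # an assumed meaning axis
for word in vectors:
    print(word, round(project(centered[word], axis), 3))
# dog and Hund now share a coordinate; before centering, the language
# dimension would have pushed them apart.
```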
Cross-lingual alignment, frequency bias, the places where meaning compresses or breaks apart. All visible on every guess.
Built on Cloudflare Workers. Embeddings precomputed offline with llama.cpp. Inspired by Semantle.
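For the curious, a hedged sketch of what the offline precompute could look like, using the llama-cpp-python bindings rather than the llama.cpp CLI; the GGUF filename is a placeholder, and Wendel's actual pipeline may differ:

```python
import json

import numpy as np
from llama_cpp import Llama  # pip install llama-cpp-python

# Placeholder path: assumes an EmbeddingGemma GGUF conversion is on disk.
llm = Llama(model_path="embeddinggemma-300m.gguf", embedding=True, verbose=False)

words = ["dog", "puppy", "Hund", "algorithm"]
table = {}
for word in words:
    vec = np.asarray(llm.embed(word), dtype=np.float32)
    # Store unit vectors so cosine similarity reduces to a dot product.
    table[word] = (vec / np.linalg.norm(vec)).tolist()

with open("embeddings.json", "w") as f:
    json.dump(table, f)
```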