Discussion about this post

PRASANNA GANESAN

I've been living through a "backbone to pyramid" journey over the last year as my "drink from the Slack firehose" strategy stopped scaling, and would love for the answer instead to be "all you need are LLMs to scalably process the context". I fear we are underestimating the level of transformation embedded in the translation. Yes, some translations are mechanical and merely a source of friction to be eliminated. OTOH, my experience has been that most innovation emerges from the minds of human translators who can span two domains and connect dots in a way no one contemplated before. Are LLMs -- properly harnessed -- up to that challenge? Does the workforce have the human capital to make optimal use of the LLM output? I guess we're going to find out. It may well be that the companies that master the right context engineering for this problem will gain an enormous edge in human capital efficiency.

Senthil Gandhi

Transformers — models that turn one sequence of symbols into another — came from the practical problem of translation at Google Translate. That was the first real-world motivation. At the time, no one thought of translation as a stand-in for intelligence. Searle’s Chinese Room gave us a hint, but I don’t recall anyone making the connection. Now it’s clearer: LLMs were built to translate. Their intelligence was an accident. Intelligence being “just” translation is the big surprise.

