[personal profile] ny_quant
Leaked Internal Google Document.
https://www.semianalysis.com/p/google-we-have-no-moat-and-neither
Far from all of it is clear to me, but what is clear is very interesting.

What Happened
At the beginning of March the open source community got their hands on their first really capable foundation model, as Meta’s LLaMA was leaked to the public. It had no instruction or conversation tuning, and no RLHF. Nonetheless, the community immediately understood the significance of what they had been given.

A tremendous outpouring of innovation followed, with just days between major developments (see The Timeline for the full breakdown). Here we are, barely a month later, and there are variants with instruction tuning, quantization, quality improvements, human evals, multimodality, RLHF, and more, many of which build on each other.

Most importantly, they have solved the scaling problem to the extent that anyone can tinker. Many of the new ideas are from ordinary people. The barrier to entry for training and experimentation has dropped from the total output of a major research organization to one person, an evening, and a beefy laptop.
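One technique often credited for this drop in cost is low-rank adaptation (LoRA), where instead of updating a full weight matrix you train two small factors and add their product to the frozen weights. A minimal sketch of the idea, with illustrative (not canonical) dimensions:

```python
import numpy as np

# Sketch of low-rank adaptation: a frozen weight W (d_out x d_in) plus a
# trainable low-rank update B @ A, with B (d_out x r) and A (r x d_in).
# Dimensions here are hypothetical, chosen only to show the parameter savings.
d_out, d_in, rank = 4096, 4096, 8

full_params = d_out * d_in            # trainable params for a full fine-tune
lora_params = rank * (d_out + d_in)   # trainable params with the adapter

print(full_params, lora_params, full_params // lora_params)
# 256x fewer trainable parameters at rank 8 for this matrix size

# Forward pass with the adapted weight:
rng = np.random.default_rng(0)
W = rng.standard_normal((d_out, d_in)).astype(np.float32)
B = np.zeros((d_out, rank), dtype=np.float32)  # B starts at zero
A = rng.standard_normal((rank, d_in)).astype(np.float32)
x = rng.standard_normal(d_in).astype(np.float32)

y = (W + B @ A) @ x
assert np.allclose(y, W @ x)  # with B = 0 the adapter is a no-op
```

Because only B and A receive gradients, the optimizer state shrinks by the same factor, which (together with quantizing the frozen W) is what brings fine-tuning within reach of a single consumer GPU or a beefy laptop.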