'In the old world you had some grunt work that you had to do. That grunt work took time, and as you spent time in the weeds and details, you were kind of continuously synthesizing what you’d learned over and over again.'
I found this to be true for me recently while reviewing a huge customer-complaint spreadsheet. Reading through the feedback myself helped things click into place in real time. While the themes and prompted summaries were technically accurate, I wasn't synthesizing them the same way yet. Maybe it would be different if I were already more familiar with the feedback and the product, instead of trying to synthesize first from an LLM summary.
10000%
I can't quite put my finger on it, but there is some "precision" that's lost that humans are capable of in pattern matching that LLMs don't exactly get.
It's kind of like you will definitely get some pattern matching out of it, but it's hard to tell whether it's the exact same patterns you would extract if you were looking yourself, whether it's sufficient to the task, how far off it is, and whether that gap ends up being material to the quality of the decision you'd make.
Thanks for sharing. The framing is very helpful and I've had similar thoughts working as an engineer.
As we move up an abstraction (from Python to technical English), it reminds me of the Feynman quote: "What I cannot create, I do not understand." As you work, synthesis comes on the path to creation. Now you're creating at high throughput, but having to invest more retroactive effort to understand if / how your creations are useful.
A related idea I've been having is how personality / aptitudes shape people's choice of role. Many python programmers today would probably have written technical English as business analysts historically. We're in an interesting moment where the roles are being redefined, and the same genetic profiles are being re-sorted.
If AIs are creating so rapidly, what abstractions do humans need to operate at to be productive? My sense is we're all destined for QA 😅
QA plus choosing a path. Feels like there's still quite a bit of fine-tuning required to get exactly what you want out of things
And knowing which paths lead to a dead end saves time too
Small example of the synthesis piece: I used to trawl API documentation and app marketplaces to see what's new. That was time-intensive. I've scripted that all away using Cursor. Now processing what's interesting out of the additions and updates takes all the time; that used to be baked into the research process itself.
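(For anyone curious, the core of that kind of watcher is just a snapshot diff. This is a minimal sketch, not the actual script; the changelog text and any URLs you'd fetch it from are placeholders.)

```python
import hashlib

def snapshot_digest(text: str) -> str:
    """Cheap change detector: compare digests before doing a full diff."""
    return hashlib.sha256(text.encode("utf-8")).hexdigest()

def new_entries(old_snapshot: str, new_snapshot: str) -> list[str]:
    """Return non-empty lines present in the new changelog snapshot
    but absent from the old one."""
    old_lines = set(old_snapshot.splitlines())
    return [
        line
        for line in new_snapshot.splitlines()
        if line.strip() and line not in old_lines
    ]

# Made-up changelog snapshots for illustration:
old = "v1.0 launch\nv1.1 bugfix"
new = "v1.0 launch\nv1.1 bugfix\nv1.2 adds webhooks"

if snapshot_digest(old) != snapshot_digest(new):
    print(new_entries(old, new))  # the additions worth a closer look
```

The digest check is just an optimization so you only diff pages that actually changed; the synthesis (deciding which additions matter) is still the human part.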
I've experienced something similar. API companies should expect usage to spike as more nontechnical users can now scale activity on APIs without needing engineers to get started (though I suspect real scale will still require professionals).
A nonprofit (CourtListener) reached out to me because my personal usage patterns on their free API were flagged as a "potential commercial product" and they wanted royalties lol
dead