seangoedecke.com RSS feed
Staff engineer writing about LLMs, technical leadership, and engineering at scale.
seangoedecke.com is an independent blog covering engineering leadership and AI & machine learning. It publishes on a weekly or bi-weekly basis, with 31 posts in its archive.
Schedule: Regular (publishes weekly or bi-weekly)
Type: Independent Blog
Language: English
How this blog's content is accessed through Blogs Are Back.
Full Content
RSS feed includes complete post content for reading in-app
Proxy Required
Feed is fetched through our proxy for browser compatibility
Proxy Post Links
Post pages are loaded through our proxy for compatibility
Embeddable
Posts can be displayed inline in the reader view
Recent posts from seangoedecke.com's RSS feed.
Insider amnesia
Speculation about what’s really going on inside a tech company is almost always wrong. When some problem with your company is posted on the internet, and you read people’s thoughts on it, their thoughts are almost always ridiculous. For instance, they might blame product managers for a particular decision, when in fact the decision in question was engineering-driven and the product org was pushing back on it. Or they might attribute an incident to overuse of AI, when the system in question was...
What's so hard about continuous learning?
Why can’t models continue to get smarter after they’re deployed? If you hire a human employee, they will grow more familiar with your systems over time, and (if they stick around long enough) eventually become a genuine domain expert. AI models are not like this. They are always exactly as capable as the first moment you use them. This is because model weights are frozen once the model is released. The model can only “learn” as much as can be stuffed into its context window: in effect, it can ta...
LLM-generated skills work, if you generate them afterwards
LLM “skills” are a short explanatory prompt for a particular task, typically bundled with helper scripts. A recent paper showed that while skills are useful to LLMs, LLM-authored skills are not. From the abstract: Self-generated skills provide no benefit on average, showing that models cannot reliably author the procedural knowledge they benefit from consuming For the moment, I don’t really want to dive into the paper. I just want to note that the way the paper uses LLMs to generate skills is...
Two different tricks for fast LLM inference
Anthropic and OpenAI both recently announced “fast mode”: a way to interact with their best coding model at significantly higher speeds. These two versions of fast mode are very different. Anthropic’s offers up to 2.5x tokens per second (so around 170, up from Opus 4.6’s 65). OpenAI’s offers more than 1000 tokens per second (up from GPT-5.3-Codex’s 65 tokens per second, so 15x). So OpenAI’s fast mode is six times faster than Anthropic’s1. However, Anthropic’s big advantage is that they’re servin...
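The speed comparisons in the excerpt above can be sanity-checked with a few lines of arithmetic. This is a minimal sketch using only the figures quoted in the post (a ~65 tokens-per-second baseline for both models, "up to 2.5x" for Anthropic, "more than 1000 tokens per second" for OpenAI); the exact numbers are the post's claims, not independent measurements:

```python
# Baseline throughput quoted for both Opus 4.6 and GPT-5.3-Codex (tokens/sec).
baseline_tps = 65

# Anthropic's fast mode: "up to 2.5x" the baseline.
anthropic_fast_tps = 2.5 * baseline_tps        # 162.5, which the post rounds to ~170

# OpenAI's fast mode: "more than 1000 tokens per second".
openai_fast_tps = 1000

print(round(openai_fast_tps / baseline_tps))   # ~15x speedup over its own baseline
print(round(openai_fast_tps / 170))            # ~6x faster than Anthropic's fast mode
```

Both ratios match the post's framing: a 15x jump for OpenAI versus its baseline, and roughly a sixfold gap between the two fast modes.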
On screwing up
The most shameful thing I did in the workplace was lie to a colleague. It was about ten years ago, I was a fresh-faced intern, and in the rush to deliver something I’d skipped the step of testing my work in staging1. It did not work. When deployed to production, it didn’t work there either. No big deal, in general terms: the page we were working on wasn’t yet customer-facing. But my colleague asked me over his desk whether this worked when I’d tested it, and I said something like “it sure did, n...
Follow seangoedecke.com
Add this blog to your reading list on Blogs Are Back, or visit the blog directly.