I’ve used spicy auto-complete, as well as agents running in my IDE, in my CLI, or on GitHub’s server-side. I’ve been experimenting enough with LLM/AI-driven programming to have an opinion on it. And it kind of sucks.
Or even distinguish between two versions of the same library. Absolutely stupid that LLMs default to writing deprecated code just because it was more common in the training data.
It will if you explicitly ask it to. Otherwise it will either make stuff up or use some really outdated patterns.
I usually start by asking Claude Code to search the Internet for the current best practices of whatever framework I'm using. Then, if I ask it to build something with that framework while the summary is still in the context window, it'll actually follow it.
So much this. It’s even more annoying when you fix the errors and paste them back, just for it to ignore them lol.