#prompt-engineering
https://openreview.net/forum?id=92gvk82DE-
> [bugglebeetle](https://news.ycombinator.com/user?id=bugglebeetle) on [Hacker News](https://news.ycombinator.com/item?id=35507089):
>
> I can’t find the link to the paper right now, but after reading about how LLMs perform better with task breakdowns, I vastly improved my integrations by having ChatGPT generate prompts that decompose a general task into a series of tasks based on a sample input and output. I haven’t needed to make a self-refining system (one or two rounds of task decomposition and refinement resulted in the expected result for all inputs), but I would assume this is fairly trivial and that AIs can do it better than humans.
>
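The technique described above can be sketched as a two-stage pipeline: one call asks the model to decompose a task (given a sample input and output) into subtasks, then each subtask is run as its own prompt over the running context. This is a minimal sketch with hypothetical helper names; `complete` is a stand-in for whatever chat-completion API you actually call, stubbed here with canned output.

```python
def complete(prompt: str) -> str:
    """Placeholder for a real LLM API call; returns canned text for demo."""
    if prompt.startswith("Decompose"):
        return "1. Extract the fields\n2. Normalize the values\n3. Format the output"
    return f"[result of: {prompt.splitlines()[0]}]"


def decompose(task: str, sample_input: str, sample_output: str) -> list[str]:
    """Ask the model to break a general task into numbered subtasks."""
    prompt = (
        "Decompose the following task into numbered subtasks.\n"
        f"Task: {task}\n"
        f"Example input: {sample_input}\n"
        f"Example output: {sample_output}\n"
    )
    lines = complete(prompt).splitlines()
    # Strip the "1. " numbering, keeping just the subtask text.
    return [line.split(". ", 1)[1] for line in lines if ". " in line]


def run_pipeline(task: str, sample_input: str, sample_output: str, data: str) -> str:
    """Run each generated subtask as its own prompt, chaining the outputs."""
    context = data
    for subtask in decompose(task, sample_input, sample_output):
        context = complete(f"{subtask}\nInput:\n{context}")
    return context


if __name__ == "__main__":
    print(run_pipeline("Convert a record to JSON", "name=Ada", '{"name": "Ada"}', "name=Ada"))
```

A refinement round, as the commenter describes, would just feed the subtask list (plus a failing input/output pair) back through `complete` and ask for a revised decomposition.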
This is also an area where I expect OpenAI will continue to demolish the competition. The ability to recursively generate and process large prompts is truly nuts. I tried swapping in some of the “high-performing” LLaMA models and they all choked on anything longer than a paragraph.