Research · prompt engineering · code generation · requirements engineering · human evaluation
REprompt Improves AI Code Generation Satisfaction
Relevance Score: 8.1
Researchers from Nanyang Technological University and East China Normal University introduce REprompt, a framework that treats prompts as requirements specifications for AI code generation. In human evaluations on a vibe-coding platform, REprompt achieved satisfaction scores of 6.3 out of 7 for games and 6.5 out of 7 for utility tools, outperforming naive prompting, zero-shot chain-of-thought, and MetaGPT. By reframing prompting as requirements engineering, the approach aims to improve the quality of generated software.
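
As a rough illustration of the idea, the sketch below contrasts a naive prompt with a requirements-style prompt for the same request. The `RequirementsPrompt` class, its field names, and the example requirements are hypothetical assumptions for illustration; the paper's actual prompt structure may differ.

```python
from dataclasses import dataclass, field

# Hypothetical sketch: field names and structure are illustrative assumptions,
# not REprompt's actual specification format.
@dataclass
class RequirementsPrompt:
    goal: str                          # what the user wants built
    functional_requirements: list[str] = field(default_factory=list)
    constraints: list[str] = field(default_factory=list)
    acceptance_criteria: list[str] = field(default_factory=list)

    def render(self) -> str:
        """Render the structured requirements as a single prompt string."""
        sections = [f"Goal: {self.goal}"]
        if self.functional_requirements:
            sections.append("Functional requirements:\n" +
                            "\n".join(f"- {r}" for r in self.functional_requirements))
        if self.constraints:
            sections.append("Constraints:\n" +
                            "\n".join(f"- {c}" for c in self.constraints))
        if self.acceptance_criteria:
            sections.append("Acceptance criteria:\n" +
                            "\n".join(f"- {a}" for a in self.acceptance_criteria))
        return "\n\n".join(sections)

# Naive prompt vs. requirements-style prompt for the same request.
naive_prompt = "Make me a snake game."
structured_prompt = RequirementsPrompt(
    goal="A browser-based snake game",
    functional_requirements=["Arrow-key controls", "Score counter", "Game-over screen"],
    constraints=["Single HTML file, no external dependencies"],
    acceptance_criteria=["Snake grows when eating food", "Collision with a wall ends the game"],
).render()
print(structured_prompt)
```

The structured form makes the model's target explicit and testable, which is the general motivation behind treating prompts as requirements specifications.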



