
Structured LLM Prompts Drive Better Results with COCOGEN


by The FewShot Prompting Publication, April 23rd, 2025

COCOGEN’s success hinges on two factors: using Code-LLMs and formatting prompts as Python code. Each helps on its own, but together they deliver the strongest performance, and human evaluations back this up.
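To make "formatting prompts as Python code" concrete, here is a minimal sketch of the idea: a (text, graph) training pair is rendered as a small Python class, and a few such classes are concatenated with an unfinished class for the test input, which the code LLM is asked to complete. The `Plan`/`Step` names, the `to_python_class` helper, and the example goals below are illustrative assumptions, not the paper's released prompt format.

```python
# Minimal sketch (not the paper's released code) of serializing a
# (text T, graph G) pair as Python code for few-shot prompting.
# `Plan`, `Step`, and the example goals are hypothetical names.

def to_python_class(goal: str, steps: list[str], edges: list[tuple[int, int]]) -> str:
    """Render a goal and its step graph as Python-like source code."""
    lines = ["class Plan:", f'    goal = "{goal}"', "", "    def __init__(self):"]
    for i, step in enumerate(steps):
        lines.append(f'        step_{i} = Step("{step}")')
    for parent, child in edges:
        lines.append(f"        step_{parent}.add_child(step_{child})")
    return "\n".join(lines)


# One hypothetical in-context example: a tiny script graph for "bake a cake".
example = to_python_class(
    goal="bake a cake",
    steps=["gather ingredients", "mix the batter", "put the pan in the oven"],
    edges=[(0, 1), (1, 2)],
)

# The few-shot prompt is k serialized examples followed by an unfinished
# class for the test goal; the code LLM completes the constructor body.
test_stub = 'class Plan:\n    goal = "plant a tree"\n\n    def __init__(self):\n'
prompt = example + "\n\n" + test_stub
print(prompt)
```

The generated assignments can then be parsed back into nodes and edges to recover the predicted graph.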

Table of Links

Abstract and 1 Introduction

2 COCOGEN: Representing commonsense structures with code and 2.1 Converting (T,G) into Python code

2.2 Few-shot prompting for generating G

3 Evaluation and 3.1 Experimental setup

3.2 Script generation: PROSCRIPT

3.3 Entity state tracking: PROPARA

3.4 Argument graph generation: EXPLAGRAPHS

4 Analysis

5 Related work

6 Conclusion, Acknowledgments, Limitations, and References

A Few-shot model size estimates

B Dynamic prompt creation

C Human Evaluation

D Dataset statistics

E Sample outputs

F Prompts

G Designing Python class for a structured task

H Impact of model size

I Variation in prompts

4 Analysis

Structured prompts vs. Code-LLMs. Which component is more important ...

