
Rhyming proofs

poem with rhymes prompt example

TL;DR

  • This is a "proof + poetry" creativity test: every line must rhyme while conveying the argument that there are infinitely many primes.
  • Good for: generation under strong formal constraints (rhyme/meter/structure), and for embedding logical content in an artistic form.
  • The risk: the rhymes work out but the proof is not rigorous; ask the model to explicitly include the key argument steps and to run a self-check.

Background

This prompt tests an LLM's natural-language and creative capabilities by asking it to write a proof of the infinitude of primes in the form of a poem.

How to Apply

Spell out the constraints separately:

  • Logic constraint: the poem must express the core Euclid-style construction and the resulting contradiction
  • Form constraint: every line rhymes; a target line count; whether to split into stanzas

If you care more about the correctness of the proof, ask for a 3-5 line plain explanation appended to the end of the output, as in the sketch below.
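A minimal sketch of this constraint split (assuming the same OpenAI client as the example further down; the constraint wording itself is only illustrative):

from openai import OpenAI

client = OpenAI()

# State the logic constraint and the form constraint as separate lines,
# and ask for a short plain explanation after the poem.
prompt = (
    "Write a poem that proves there are infinitely many primes.\n"
    "Logic constraint: include the Euclid-style construction (multiply an "
    "assumed finite list of primes and add one) and the resulting contradiction.\n"
    "Form constraint: every line must rhyme; use 8-12 lines in two stanzas.\n"
    "After the poem, add a 3-5 line plain explanation of the proof."
)

response = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)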

How to Iterate

  1. Specify a rhyme scheme (e.g. AABB or ABAB) to reduce the model's randomness
  2. Specify a line count (e.g. 8-12 lines) so the poem doesn't run long and drift off topic
  3. Self-check: have the model list which lines of the poem carry each key step of the proof
  4. Have the model output a proof outline first, then the poem (two-pass); see the sketch after this list
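A two-pass sketch (an illustrative pattern, not code from the original page): the first call asks for a plain proof outline, the second turns that outline into the poem under explicit form constraints.

from openai import OpenAI

client = OpenAI()

# Pass 1: a plain proof outline, with no poetic constraints yet.
outline = client.chat.completions.create(
    model="gpt-4",
    messages=[{
        "role": "user",
        "content": "List, in 4-6 numbered steps, Euclid's proof that there are infinitely many primes.",
    }],
).choices[0].message.content

# Pass 2: turn the outline into a rhyming poem and ask for a step-to-line mapping.
poem = client.chat.completions.create(
    model="gpt-4",
    messages=[{
        "role": "user",
        "content": (
            f"Turn this proof outline into a poem:\n{outline}\n\n"
            "Constraints: AABB rhyme scheme, 8-12 lines, every outline step must appear. "
            "End with a mapping from each outline step to the line numbers that express it."
        ),
    }],
).choices[0].message.content

print(poem)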

Self-check rubric

  • Is the rhyme constraint satisfied (every line rhymes)?
  • Are the key proof steps all present and mutually consistent?
  • Are there any mathematical errors or equivocations?
  • Is the output readable, with no constraints conflicting with each other?
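One way to turn this rubric into a verification pass (a sketch under the same client assumption; poem stands for the text generated earlier):

from openai import OpenAI

client = OpenAI()

poem = "..."  # the rhyming proof produced by one of the calls above

# Ask the model to grade the poem against the rubric, point by point.
review = client.chat.completions.create(
    model="gpt-4",
    messages=[{
        "role": "user",
        "content": (
            "Check this rhyming proof against the rubric below. Answer each point "
            "with yes/no and one sentence of evidence.\n"
            "1. Does every line rhyme?\n"
            "2. Are the key proof steps all present and consistent?\n"
            "3. Are there any mathematical errors or equivocations?\n"
            "4. Is the output readable, with no conflicting constraints?\n\n"
            f"Poem:\n{poem}"
        ),
    }],
)
print(review.choices[0].message.content)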

Practice

Exercise: apply the same format constraints to a proof of another simple result (e.g. "the sum of two even numbers is even") and have the model output:

  • the poem
  • a 3-line plain explanation
  • a mapping from proof steps to line numbers
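One possible wording for the exercise prompt (illustrative only, not part of the original page):

Write a proof that the sum of two even numbers is even, as a poem in which every line rhymes (AABB, 8 lines). After the poem, add a 3-line plain explanation, then a list mapping each proof step to the line numbers that express it.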

Prompt

Can you write a proof that there are infinitely many primes, with every line that rhymes?

Code / API

OpenAI (Python)

from openai import OpenAI

client = OpenAI()

# Send the rhyming-proof prompt as a single user message.
response = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {
            "role": "user",
            "content": "Can you write a proof that there are infinitely many primes, with every line that rhymes?",
        }
    ],
    temperature=1,
    max_tokens=256,  # may truncate a longer poem; raise if needed
    top_p=1,
    frequency_penalty=0,
    presence_penalty=0,
)

# Print the generated poem.
print(response.choices[0].message.content)

Fireworks (Python)

import fireworks.client

fireworks.client.api_key = "<FIREWORKS_API_KEY>"

# Stream the completion from Fireworks' hosted Mixtral model.
completion = fireworks.client.ChatCompletion.create(
    model="accounts/fireworks/models/mixtral-8x7b-instruct",
    messages=[
        {
            "role": "user",
            "content": "Can you write a proof that there are infinitely many primes, with every line that rhymes?",
        }
    ],
    stop=["<|im_start|>", "<|im_end|>", "<|endoftext|>"],
    stream=True,
    n=1,
    top_p=1,
    top_k=40,
    presence_penalty=0,
    frequency_penalty=0,
    prompt_truncate_len=1024,
    context_length_exceeded_behavior="truncate",
    temperature=0.9,
    max_tokens=4000,
)

# With stream=True the call returns a generator of OpenAI-style chunks;
# print the text deltas as they arrive.
for chunk in completion:
    print(chunk.choices[0].delta.content or "", end="")
