Improving Llama2 in game 24 with memory of thought and tree of thought
Memory of Thought and Tree of Thought are prompting mechanisms designed to let large language models self-improve without relying on annotated datasets or costly model fine-tuning. This dissertation integrates Chain of Thought (CoT) prompting to en...
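For context, the Game of 24 benchmark used in the thesis title asks whether four given numbers can be combined with +, -, *, / (each number used exactly once, parentheses allowed) to reach 24. The sketch below is a minimal brute-force reference solver for the task itself, not the thesis's prompting method; the function name `solve_24` is an illustrative choice:

```python
from itertools import permutations, product

def solve_24(nums, target=24, eps=1e-6):
    """Brute-force Game of 24 solver: try every ordering of the four
    numbers, every choice of the three operators, and every
    parenthesization; return one solving expression or None."""
    ops = ['+', '-', '*', '/']
    for a, b, c, d in permutations(nums):
        for o1, o2, o3 in product(ops, repeat=3):
            # The five distinct parenthesizations of four operands.
            exprs = [
                f"(({a}{o1}{b}){o2}{c}){o3}{d}",
                f"({a}{o1}({b}{o2}{c})){o3}{d}",
                f"({a}{o1}{b}){o2}({c}{o3}{d})",
                f"{a}{o1}(({b}{o2}{c}){o3}{d})",
                f"{a}{o1}({b}{o2}({c}{o3}{d}))",
            ]
            for e in exprs:
                try:
                    if abs(eval(e) - target) < eps:
                        return e
                except ZeroDivisionError:
                    continue
    return None

# Example instance: 4 9 10 13 is solvable, e.g. (10 - 4) * (13 - 9).
print(solve_24([4, 9, 10, 13]))
```

Tree of Thought-style prompting tackles the same search space by having the model propose and evaluate intermediate states instead of enumerating them exhaustively.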
| Main Author: | Zhang, Yixiang |
| --- | --- |
| Other Authors: | Lihui Chen |
| Format: | Thesis-Master by Coursework |
| Language: | English |
| Published: | Nanyang Technological University, 2024 |
| Subjects: | |
| Online Access: | https://hdl.handle.net/10356/181809 |
Similar Items
- Llama2 self-improvement using memory-of-thought
  by: Dong, Yuxiu
  Published: (2024)
- Evaluating the carbon footprint of code implementation
  by: Tar, Sreeja
  Published: (2024)
- Mixed halide formation in lead-free antimony-based halide perovskite for boosted CO₂ photoreduction: beyond band gap tuning
  by: Lee, Jiale, et al.
  Published: (2023)
- Molecular tuning for electrochemical CO₂ reduction
  by: Zhang, Jincheng, et al.
  Published: (2023)
- The exquisite corporealities of Leibniz: performance as embodied practice of thought and documentary praxis
  by: Spackman, Helen
  Published: (2013)