CSE - Making Better Large Language Models
Large language models (LLMs) are being used in a growing range of applications. Our lab is looking for self-motivated undergraduate students interested in developing better LLMs or finding better ways to use them. We study LLMs in various real-world scenarios, from personalizing intelligent assistants, advancing scientific discovery, and facilitating K-12 education, to improving privacy, safety, and virtue ethics. We aim to improve several abilities of LLMs: (a) following complex instructions, (b) performing complex reasoning, (c) detecting and reducing hallucination and other undesired behaviors, (d) understanding and generating multimodal information, (e) providing personalized responses, (f) simulating the behaviors of specific populations, (g) forgetting private or sensitive information, (h) aligning model behavior with ethical standards, and (i) understanding tables and charts in scientific domains.