to make the model produce a precise decision. The main prompt is placed in the User Input block, where the sentence placeholder is replaced with the input text. We also provided two actual examples in the prompt as a few-shot in-context learning technique [10]. To obtain stable and accurate results, we set the temperature to 0.25 and assigned a static seed number. For every request we set the json_mode parameter to true, which constrains the model to return a JSON-decodable string response that our system can consume directly. The same approach applies to the other models, such as Claude and Gemini: as mentioned above, we can apply the same technique and strategy to those models and compare their performance.

Figure 3: OpenAI Chat API Request Example.
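The request configuration described above can be sketched with the OpenAI Python SDK as follows. This is a minimal illustration rather than the exact code we used: the model name, system prompt, and few-shot examples are placeholders standing in for the prompt shown in Figure 3, while the temperature, seed, and JSON-mode settings follow the description above.

# Minimal sketch of the request described above, using the OpenAI Python SDK.
# The model name, system prompt, and few-shot examples below are illustrative
# placeholders; the actual prompt is the one shown in Figure 3.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def detect_odor(sentence: str) -> str:
    """Return the model's JSON answer for one input sentence."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",                      # assumed model name
        temperature=0.25,                         # low temperature for stable output
        seed=42,                                  # static seed for reproducibility
        response_format={"type": "json_object"},  # JSON mode: force a JSON reply
        messages=[
            {"role": "system",
             "content": "Label the sentence with the odor it refers to. "
                        'Answer as JSON: {"label": "<odor, Other, or None>"}.'},
            # two examples as few-shot in-context learning
            {"role": "user", "content": "The lavender field stretched to the horizon."},
            {"role": "assistant", "content": '{"label": "Lavender"}'},
            {"role": "user", "content": "Imagining the taste of his favorite dish, he smiled."},
            {"role": "assistant", "content": '{"label": "None"}'},
            # the User Input block: the actual sentence to classify
            {"role": "user", "content": sentence},
        ],
    )
    return response.choices[0].message.content  # JSON-decodable string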
3.4 Fine-tuning Large Language Models

Fine-tuning is a critical process in deep learning where a pre-trained model is further trained on a specific dataset to improve its performance on specialized tasks. For LLMs, fine-tuning adapts a model that has already learned general language patterns to excel in understanding and generating domain-specific content. By leveraging fine-tuning, an LLM can refine its comprehension to align with specific requirements. LLMs are built to understand a wide variety of language constructs; fine-tuning enhances their utility by honing in on domain-specific patterns. For instance, a general-purpose LLM may not inherently recognize subtle hints of odor references in text. Fine-tuning enables it to do so by exposing it to labeled examples of odor-related context, helping it discern nuances that distinguish a coffee aroma from a barbecue or a lavender field from an apple orchard. By training on curated examples, the model becomes more adept at detecting subtle contextual cues that signal specific odor associations, thus enhancing its context sensitivity. To fine-tune the model effectively, we created a specialized dataset consisting of 840 pairs of text and their corresponding odor labels. These pairs were generated using ChatGPT, aiming to provide a rich variety of scenarios. The dataset comprises different types of data; representative samples are shown in Table 1.
Type             Sentence                                               Label
Normal           The aroma from the café's kitchen was inviting and    Coffee
                 warm, drawing people in from the street.
Not in the list  Grilling the vegetables, their smoky aroma was        Other
                 irresistible.
Imagination      Imagining the taste of his favorite dish, he smiled.  None

Table 1: Dataset samples and their type.
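To give a concrete picture of the fine-tuning input, the sketch below shows how the (sentence, label) pairs of Table 1 could be converted into the chat-style JSONL format used by OpenAI's fine-tuning API and submitted as a job. This is only an illustration under assumed names: the file name, base model, and system prompt are not necessarily those used in our experiments, and the equivalent step for Google's model would go through its own fine-tuning interface.

# Sketch of how the (sentence, label) pairs from Table 1 could be packaged
# into the chat-style JSONL format expected by OpenAI's fine-tuning API and
# submitted as a job. File name, base model, and system prompt are assumptions.
import json
from openai import OpenAI

SYSTEM_PROMPT = ("Label the sentence with the odor it refers to. "
                 'Answer as JSON: {"label": "<odor, Other, or None>"}.')

samples = [  # (sentence, label) pairs, e.g. the rows of Table 1
    ("The aroma from the café's kitchen was inviting and warm, "
     "drawing people in from the street.", "Coffee"),
    ("Grilling the vegetables, their smoky aroma was irresistible.", "Other"),
    ("Imagining the taste of his favorite dish, he smiled.", "None"),
]

# 1) Write the training examples as one chat conversation per JSONL line.
with open("odor_train.jsonl", "w", encoding="utf-8") as f:
    for sentence, label in samples:
        example = {"messages": [
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": sentence},
            {"role": "assistant", "content": json.dumps({"label": label})},
        ]}
        f.write(json.dumps(example, ensure_ascii=False) + "\n")

# 2) Upload the training file and start the fine-tuning job.
client = OpenAI()
uploaded = client.files.create(file=open("odor_train.jsonl", "rb"),
                               purpose="fine-tune")
job = client.fine_tuning.jobs.create(training_file=uploaded.id,
                                     model="gpt-3.5-turbo")  # assumed base model
print(job.id)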
We used 10 samples for each genre for fine-tuning the LLMs and reserved the remaining data for evaluation. Fine-tuning was conducted on OpenAI's and Google's large language model