LARGE LANGUAGE MODELS FOR AUTOMOTIVE EMBEDDED ELECTRONICS
Author(s): Nesrine Mhadhbi
In this revised extended abstract, several improvements have been made to enhance clarity, methodological rigor, and coherence. The study now clearly emphasizes the application of LLMs to automotive embedded electronics, specifying the fine-tuning of pre-trained models on domain-specific datasets comprising E/E architecture models, requirements, and model behavior. The methodology has been detailed to indicate the model types considered (GPT-style and LLaMA variants) and is structured around three steps: domain-specific fine-tuning, integration within Dassault Systèmes’ engineering environment, and contextual learning with human-in-the-loop evaluation.

An evaluation phase has been incorporated to assess relevance, usability, and potential efficiency gains using both objective measures and user feedback. The expected-results section has been refined to describe the assistant’s capabilities without referring to a “proof-of-concept” and to include practical outputs and scientific insights.

Peer-reviewed references have been added alongside the existing arXiv and ResearchGate citations to strengthen contextualization. Finally, minor textual refinements, including logical connectors and slightly longer paragraphs, improve readability and alignment across sections, reinforcing the scientific rigor and coherence of the abstract.
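The domain-specific fine-tuning step presupposes that E/E requirements and behavior descriptions are available in a machine-readable supervised format. As a minimal illustrative sketch (the field names, requirement texts, and the prompt-completion JSONL shape are assumptions for illustration, not artifacts of the study), such records might be converted into fine-tuning pairs as follows:

```python
import json

# Hypothetical domain-specific records: each pairs an E/E architecture
# requirement with the behavior an assistant should describe.
# Field names and contents are illustrative only.
records = [
    {
        "requirement": "The body control module shall debounce door-switch "
                       "inputs within 50 ms.",
        "behavior": "On a door-switch edge, start a 50 ms timer and accept "
                    "the new state only if it is stable when the timer expires.",
    },
    {
        "requirement": "The CAN gateway shall drop frames with unknown IDs.",
        "behavior": "Compare each received frame ID against the routing "
                    "table; forward on a match, otherwise discard silently.",
    },
]

def to_finetune_pairs(records):
    """Convert requirement/behavior records into prompt-completion pairs,
    a common JSONL shape for supervised fine-tuning of LLMs."""
    pairs = []
    for rec in records:
        pairs.append({
            "prompt": (f"Requirement: {rec['requirement']}\n"
                       "Describe the expected behavior."),
            "completion": rec["behavior"],
        })
    return pairs

pairs = to_finetune_pairs(records)
# One JSON object per line, the usual on-disk format for fine-tuning data.
jsonl = "\n".join(json.dumps(p) for p in pairs)
```

Keeping the prompt template fixed across all records helps the fine-tuned model associate the "Requirement: ... Describe the expected behavior." pattern with the desired response style.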