CodeLlama-13B-QML Released on Hugging Face
January 31, 2025 by Peter Schneider
We have released the CodeLlama-13B-QML Large Language Model for QML code completion on Hugging Face.
What is CodeLlama-13B-QML?
CodeLlama-13B-QML is an LLM designed for writing Qt 6-compliant code using the Fill-In-the-Middle (FIM) method. It is built on Meta’s CodeLlama-13B base model, which we fine-tuned with over 4,000 QML code snippets.
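As a rough illustration of Fill-In-the-Middle, CodeLlama-family models accept an infilling prompt in which the code before the cursor follows a `<PRE>` token, the code after the cursor follows `<SUF>`, and the model generates the missing middle after `<MID>` (check the model card for the exact template; the QML snippet below is a made-up example):

```python
# Sketch of a Fill-In-the-Middle (FIM) prompt in the CodeLlama infilling
# style: prefix = code before the cursor, suffix = code after the cursor.
# The model is asked to generate whatever belongs in between.

prefix = (
    "import QtQuick\n"
    "import QtQuick.Controls\n\n"
    "ApplicationWindow {\n"
    "    visible: true\n"
)
suffix = "\n}"

fim_prompt = f"<PRE> {prefix} <SUF>{suffix} <MID>"
print(fim_prompt)
```

The completion returned by the model is then inserted between the prefix and the suffix in the editor.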
The CodeLlama-13B-QML model targets companies and individuals who want to self-host an LLM for UI development in a private cloud instead of relying on third-party hosted LLMs.
The best artificial QML coder, according to the QML100FIM benchmark
CodeLlama-13B-QML scores 79% on the QML100 Fill-In-the-Middle code completion benchmark for Qt 6 release-compliant code (see the related blog post here). In comparison, Claude 3.5 Sonnet scored 68%, CodeLlama-13B base scored 66%, and GPT-4o scored 62%.
What does fine-tuning the base model mean?
We used LoRA (Low-Rank Adaptation), a cost- and energy-efficient fine-tuning method that adapts an LLM’s neural network with additional training data, to add QML knowledge to the CodeLlama-13B base model. All code snippets were manually verified in Qt Creator 15 against the Qt 6.8 release, and all of them pass the QML linter.
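To sketch the idea behind LoRA (this is an illustration of the technique, not The Qt Company's actual training code): the base weight matrix W stays frozen, and only a low-rank update B·A is trained, which drastically reduces the number of trainable parameters:

```python
import random

random.seed(0)

def matmul(X, Y):
    """Naive matrix multiply for small illustrative matrices."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*Y)]
            for row in X]

d, k, r = 4, 4, 1   # full weight is d x k; low rank r << min(d, k)
alpha = 2.0         # LoRA scaling factor

W = [[random.random() for _ in range(k)] for _ in range(d)]  # frozen base weights
B = [[random.random() for _ in range(r)] for _ in range(d)]  # trainable, d x r
A = [[random.random() for _ in range(k)] for _ in range(r)]  # trainable, r x k

delta = matmul(B, A)        # low-rank update, d x k
scale = alpha / r
W_eff = [[w + scale * dw for w, dw in zip(w_row, d_row)]
         for w_row, d_row in zip(W, delta)]

# Only d*r + r*k = 8 numbers are trained here instead of d*k = 16;
# at LLM scale the savings are far larger.
print(len(W_eff), len(W_eff[0]))
```

In real fine-tuning, A and B are learned by gradient descent on the new training data while W is left untouched, so the adapter can be stored and shipped separately from the base model.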
We started putting these code snippets together as early as November 2023 for our first proof of concept for code generation. At some point, we had three full-time QML developers with extensive customer-project experience creating them. Some of the snippets are based on the official examples at doc.qt.io.
The fine-tuning data set focuses on Qt Quick Controls and common QML components from the following QML libraries: QtCore, QtQuick, QtQuick.Controls, QtQuick.Dialogs, QtQuick.Layouts, QtQuick.Effects, QtMultimedia, QtQuick.Shapes, and QtGraphs.
How to use CodeLlama-13B-QML?
Companies and individuals can download the fine-tuned CodeLlama-13B-QML model from Hugging Face here. You do not need a commercial Qt license to use the CodeLlama-13B-QML model. For example, you can use it directly with a Command Line Interface (CLI) prompt with Ollama.
CodeLlama-13B-QML is a medium-sized Language Model that requires significant computing resources to achieve inference (response) times suitable for automatic code completion. Therefore, it should be used with a GPU accelerator, either in a cloud environment such as AWS, Google Cloud, or Microsoft Azure, or locally. Depending on your personal computer’s hardware, CodeLlama-13B-QML can be deployed locally using Ollama. However, without a powerful GPU that Ollama can use, inference times may be too slow for daily use.
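As a sketch of local use, assuming you have imported the model into Ollama under a name of your choosing (the name `codellama-13b-qml` below is hypothetical), a completion request for Ollama's local REST API at `http://localhost:11434/api/generate` could be built like this:

```python
import json

# Hypothetical model name -- use whatever name you gave the model
# when importing it into Ollama.
MODEL = "codellama-13b-qml"

def build_request(prefix: str, suffix: str) -> bytes:
    """Build the JSON body for a POST to Ollama's /api/generate endpoint."""
    payload = {
        "model": MODEL,
        # CodeLlama-style infilling prompt; "raw" skips Ollama's own
        # prompt template so the FIM tokens reach the model unchanged.
        "prompt": f"<PRE> {prefix} <SUF>{suffix} <MID>",
        "raw": True,
        "stream": False,
        "options": {"temperature": 0.2, "num_predict": 128},
    }
    return json.dumps(payload).encode("utf-8")

body = build_request("Rectangle {\n    ", "\n}")
print(body.decode("utf-8"))
# To actually send it against a running Ollama instance:
# urllib.request.urlopen(urllib.request.Request(
#     "http://localhost:11434/api/generate", data=body,
#     headers={"Content-Type": "application/json"}))
```

The response's `response` field would contain the generated middle section; inference speed depends entirely on the GPU available to Ollama, as noted above.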
If you would like assistance deploying LLMs in a private cloud, you can inquire with Qt's Professional Services about running CodeLlama-13B-QML in a private cloud deployment.
Large Language Models, including CodeLlama-13B-QML, are not designed to be deployed in isolation but instead should be deployed as part of an overall AI system with additional safety guardrails as required. Developers are expected to deploy system safeguards when building AI systems.
CodeLlama-13B is a model of the Llama 2 family. Llama 2 is licensed under the LLAMA 2 Community License, Copyright (c) Meta Platforms, Inc. All Rights Reserved.
Need more info?
If you want to know more about what the Qt AI Assistant can do for you, please visit our product pages.
If you need instructions on how to get started, please refer to our documentation.