Meta Launches LLM Compiler for Code Optimization

TapTechNews June 30th news: Meta released a model named LLM Compiler two days ago. Built on Meta's existing Code Llama, the model focuses on code optimization. The models are now available on Hugging Face in 7-billion- and 13-billion-parameter versions, and are licensed for both academic and commercial use.


Meta notes that although leading large language models have demonstrated strong capabilities across a wide range of programming tasks, they still leave room for improvement when it comes to code optimization. LLM Compiler is a pretrained model designed specifically for code-optimization tasks: it can emulate the compiler to optimize code, or disassemble already-compiled and optimized code back into its original representation.
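To make the task concrete, here is a minimal sketch (not from the article) that uses LLVM's real `opt` tool to apply an optimization pipeline to textual LLVM-IR, which is the kind of compiler behavior LLM Compiler is trained to emulate. The file names and the size-oriented pass pipeline are illustrative assumptions.

```python
# Illustrative sketch: the compiler transformation that LLM Compiler learns
# to emulate, performed here with LLVM's actual `opt` tool via subprocess.
# Assumptions: LLVM's `opt` is on PATH, and `input.ll` holds textual LLVM-IR.
import subprocess

def optimize_ir(input_path: str, output_path: str, pipeline: str = "default<Oz>") -> None:
    """Run an LLVM optimization pipeline over textual IR (-S keeps the output textual)."""
    subprocess.run(
        ["opt", "-S", f"-passes={pipeline}", input_path, "-o", output_path],
        check=True,
    )

if __name__ == "__main__":
    # A size-oriented pipeline (Oz), mirroring the model's code-size focus.
    optimize_ir("input.ll", "optimized.ll")
```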

TapTechNews learned that LLM Compiler was trained on a corpus of 546 billion tokens of LLVM-IR and assembly code, and is reported to reach 77% of the optimization potential of an autotuning search. Developers are free to combine the models with other AI models to improve the quality of generated code.
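As a hedged sketch of how a developer might load one of the released checkpoints with Hugging Face's transformers library: the repo ID `facebook/llm-compiler-7b` and the free-form prompt below are illustrative assumptions, not taken from the article; the model card documents the exact prompt format.

```python
# Minimal sketch: loading an LLM Compiler checkpoint from Hugging Face and
# asking it to optimize a snippet of LLVM-IR.
# Assumptions: the repo ID and the prompt wording below are illustrative only.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "facebook/llm-compiler-7b"  # assumed repo ID; a 13B variant was also released
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

prompt = "Optimize the following LLVM-IR for code size:\n<llvm-ir here>"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```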
