
GGUF conversion of LLM4Decompile (https://github.com/albertan017/LLM4Decompile) for use with llama.cpp (https://github.com/ggerganov/llama.cpp).

Usage

./main --model ./models/llm4decompile-6.7b-uo-f16.gguf --threads 16 --color -c 2048 -n -1 --repeat-penalty 1.2 -ngl 33 --temp 0.7 -f prompts/llm4decompile.txt

The -ngl (GPU layers) and --threads values may be lowered to reduce GPU and CPU usage, respectively.
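
The model can also be called from Python via llama-cpp-python. This is a minimal sketch, not part of the original card: it assumes the llama-cpp-python package is installed, that the model path matches the file above, and that a prompt file in the format described in the next section already exists.

from llama_cpp import Llama

# Load the GGUF model; n_gpu_layers and n_threads mirror -ngl and --threads above.
llm = Llama(
    model_path="./models/llm4decompile-6.7b-uo-f16.gguf",
    n_ctx=2048,
    n_gpu_layers=33,
    n_threads=16,
)

# Read a prompt prepared in the format shown under "Prompt Format".
with open("prompts/llm4decompile.txt") as f:
    prompt = f.read()

# Generate the decompiled source; temperature and repeat_penalty mirror the CLI flags.
output = llm(prompt, max_tokens=512, temperature=0.7, repeat_penalty=1.2)
print(output["choices"][0]["text"])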

Prompt Format

# This is the assembly code:
0000000000001139 <main>:
    1139:    push   %rbp
    113a:    mov    %rsp,%rbp
    113d:    sub    $0x10,%rsp
    1141:    mov    %edi,-0x4(%rbp)
    1144:    mov    %rsi,-0x10(%rbp)
    1148:    lea    0xeb5(%rip),%rax
    114f:    mov    %rax,%rdi
    1152:    call   1030 <puts@plt>
    1157:    mov    $0x0,%eax
    115c:    leave
    115d:    ret
# What is the source code?
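
The assembly between the two marker lines is plain objdump output for the function of interest. Below is a minimal sketch of building such a prompt file from a C source, assuming gcc and objdump are on the PATH; the file names and the target function (main) are illustrative.

import subprocess

# Compile without optimization and disassemble the resulting binary.
subprocess.run(["gcc", "-O0", "-o", "sample", "sample.c"], check=True)
disasm = subprocess.run(["objdump", "-d", "sample"],
                        capture_output=True, text=True, check=True).stdout

# Keep only the disassembly of main: from its label up to the next blank line.
lines = disasm.splitlines()
start = next(i for i, l in enumerate(lines) if l.endswith("<main>:"))
end = next((i for i in range(start + 1, len(lines)) if not lines[i].strip()), len(lines))
body = "\n".join(lines[start:end])

# Wrap the assembly in the markers the model expects.
with open("prompts/llm4decompile.txt", "w") as f:
    f.write("# This is the assembly code:\n" + body + "\n# What is the source code?\n")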
Model Details

Format: GGUF
Model size: 6.74B params
Architecture: llama
Precision: 16-bit (F16)