Paper: LoRA: Low-Rank Adaptation of Large Language Models (arXiv:2106.09685)
This model is a code generation model fine-tuned from CodeLlama-13b-hf and specialized for n8n workflow automation.
You can try the example prompts in the widget above!
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Load the base model
base_model = AutoModelForCausalLM.from_pretrained(
    "codellama/CodeLlama-13b-hf",
    torch_dtype=torch.float16,
    device_map="auto"
)

# Attach the n8n fine-tuned LoRA adapter
model = PeftModel.from_pretrained(base_model, "AlpYzc/code-llama-13b-turkish-quick-fix")

# Tokenizer
tokenizer = AutoTokenizer.from_pretrained("AlpYzc/code-llama-13b-turkish-quick-fix")

# Generate an n8n workflow
prompt = "Create an n8n workflow that triggers when a webhook receives data:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(inputs.input_ids, max_new_tokens=150, temperature=0.7, do_sample=True)
result = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(result)
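The 13B base model needs roughly 26 GB of GPU memory at float16. If that is not available, one common option (not part of the original instructions above, just a sketch using standard transformers/bitsandbytes APIs) is to load the base model in 4-bit and attach the adapter in the same way:

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel

# 4-bit NF4 quantization (requires the bitsandbytes package)
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
)

base_model = AutoModelForCausalLM.from_pretrained(
    "codellama/CodeLlama-13b-hf",
    quantization_config=bnb_config,
    device_map="auto",
)

# LoRA adapter and tokenizer are loaded exactly as in the example above
model = PeftModel.from_pretrained(base_model, "AlpYzc/code-llama-13b-turkish-quick-fix")
tokenizer = AutoTokenizer.from_pretrained("AlpYzc/code-llama-13b-turkish-quick-fix")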
| Model | n8n Terms | Workflow Focus | JSON Structure |
|---|---|---|---|
| Original CodeLlama | ⭐⭐ | ⭐⭐ | ⭐⭐ |
| n8n Fine-tuned | ⭐⭐⭐⭐⭐ | ⭐⭐⭐⭐⭐ | ⭐⭐⭐⭐ |
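A comparison like this can be reproduced from a single loaded model: PEFT's disable_adapter() context manager temporarily bypasses the LoRA weights, so the same prompt can be generated with and without the n8n adapter. A minimal sketch (the compare() helper is illustrative, not part of this repo), assuming the model and tokenizer from the usage example above:

def compare(prompt, max_new_tokens=150):
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

    # Original CodeLlama: LoRA weights temporarily bypassed
    with model.disable_adapter():
        base_out = model.generate(inputs.input_ids, max_new_tokens=max_new_tokens)

    # n8n fine-tuned: adapter active again outside the context manager
    tuned_out = model.generate(inputs.input_ids, max_new_tokens=max_new_tokens)

    print("=== Original CodeLlama ===")
    print(tokenizer.decode(base_out[0], skip_special_tokens=True))
    print("=== n8n Fine-tuned ===")
    print(tokenizer.decode(tuned_out[0], skip_special_tokens=True))

compare("Create a webhook in n8n:")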
Original CodeLlama:
Create a n8n webhook workflow:
1. Add a webhook node
2. Create a webhook url
3. Update the n8n workflow with the webhook url
n8n Fine-tuned:
Create a webhook in n8n:
1. Create a new workflow.
2. Add a webhook node.
3. Copy the URL from the Webhook node to the clipboard.
4. Paste the URL into the N8N_WEBHOOK_URL field in the .env file.
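As an illustration of step 4 above, the URL stored in .env can then be read by the calling application and posted to. The N8N_WEBHOOK_URL name comes from the sample output; the python-dotenv/requests usage and the payload are assumptions made for this sketch, not something the model requires:

import os
import requests
from dotenv import load_dotenv  # python-dotenv; assumed to be installed

load_dotenv()  # reads N8N_WEBHOOK_URL from the .env file
webhook_url = os.environ["N8N_WEBHOOK_URL"]

# Hypothetical payload; the n8n Webhook node passes the JSON body into the workflow
response = requests.post(webhook_url, json={"event": "file_changed", "path": "/tmp/example.csv"})
print(response.status_code)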
Example prompts:

prompt = "Create n8n workflow for monitoring file changes:"
# Generates complete n8n workflow with proper nodes

prompt = '{"name": "HTTP Request", "type": "n8n-nodes-base.httpRequest",'
# Generates valid n8n node JSON configuration

prompt = "n8n automation: CSV processing and Slack notification:"
# Generates multi-step automation workflows
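Each of the prompts above goes through the same tokenize/generate/decode steps as the usage example; a small helper keeps that boilerplate in one place. A minimal sketch (the generate_n8n name is illustrative), assuming model and tokenizer are already loaded:

def generate_n8n(prompt, max_new_tokens=200, temperature=0.7):
    """Generate n8n workflow text or node JSON for a prompt."""
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    outputs = model.generate(
        inputs.input_ids,
        max_new_tokens=max_new_tokens,
        temperature=temperature,
        do_sample=True,
    )
    return tokenizer.decode(outputs[0], skip_special_tokens=True)

print(generate_n8n('{"name": "HTTP Request", "type": "n8n-nodes-base.httpRequest",'))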
Requirements: transformers, peft, torch

@misc{code-llama-n8n-2025,
  title={Code Llama 13B n8n Workflow Generator},
  author={AlpYzc},
  year={2025},
  url={https://huggingface.co/AlpYzc/code-llama-13b-turkish-quick-fix}
}
This model was developed for the n8n community. Feedback and suggestions for improvement are welcome!
Base model: codellama/CodeLlama-13b-hf