Efficient Triton Kernels for LLM Training
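For context, a Triton kernel is an ordinary Python function compiled for the GPU and executed over blocks of elements. The sketch below is a generic elementwise-add kernel in the style of the upstream Triton tutorial, not a kernel taken from this project, and it needs CUDA tensors to run.

```python
import torch
import triton
import triton.language as tl

@triton.jit
def add_kernel(x_ptr, y_ptr, out_ptr, n_elements, BLOCK_SIZE: tl.constexpr):
    # Each program instance handles one contiguous block of elements.
    pid = tl.program_id(axis=0)
    offsets = pid * BLOCK_SIZE + tl.arange(0, BLOCK_SIZE)
    mask = offsets < n_elements          # guard against the ragged final block
    x = tl.load(x_ptr + offsets, mask=mask)
    y = tl.load(y_ptr + offsets, mask=mask)
    tl.store(out_ptr + offsets, x + y, mask=mask)

def add(x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
    out = torch.empty_like(x)
    n = out.numel()
    grid = lambda meta: (triton.cdiv(n, meta["BLOCK_SIZE"]),)
    add_kernel[grid](x, y, out, n, BLOCK_SIZE=1024)
    return out

# Usage (requires a CUDA device):
# a = torch.randn(4096, device="cuda"); b = torch.randn(4096, device="cuda")
# assert torch.allclose(add(a, b), a + b)
```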
#LLM#Explore LLM deployment on AXera's AI chips
Sample code base for Gemma2 (9B) and Llama3-8B fine-tuning and RAG, implemented on the Kaggle platform
Serverless AI Inference with Gemma 2 using Mozilla's llamafile on AWS Lambda
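A llamafile started in server mode exposes an OpenAI-compatible HTTP API, so a deployment like this is typically called with a plain chat-completions request. The Lambda function URL and model name below are placeholders, not this project's actual endpoint.

```python
import json
import urllib.request

LAMBDA_URL = "https://example-lambda-url.on.aws"  # placeholder for the Lambda function URL

payload = {
    "model": "gemma-2",  # placeholder; the server answers with whatever model it loaded
    "messages": [{"role": "user", "content": "Explain serverless inference in one sentence."}],
}
req = urllib.request.Request(
    f"{LAMBDA_URL}/v1/chat/completions",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    body = json.loads(resp.read())
    print(body["choices"][0]["message"]["content"])
```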
Finetuning of Gemma-2 2B for structured output
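A minimal sketch of what such a fine-tune can look like with LoRA adapters, assuming the google/gemma-2-2b checkpoint and toy prompt-to-JSON pairs; the hyperparameters, target modules, and data below are illustrative rather than this project's setup.

```python
import torch
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM, AutoTokenizer

base_name = "google/gemma-2-2b"  # assumed base checkpoint
tokenizer = AutoTokenizer.from_pretrained(base_name)
model = AutoModelForCausalLM.from_pretrained(base_name)

# Keep the 2B base frozen; only small low-rank adapter matrices are trained.
lora = LoraConfig(r=16, lora_alpha=32, target_modules=["q_proj", "v_proj"],
                  task_type="CAUSAL_LM")
model = get_peft_model(model, lora)

# Toy supervision: free-text prompt in, strict JSON out.
examples = [
    ("Extract the product and price: 'Blue mug for $12'",
     '{"product": "Blue mug", "price": 12}'),
]

optimizer = torch.optim.AdamW(model.parameters(), lr=2e-4)
model.train()
for prompt, target in examples:
    batch = tokenizer(prompt + "\n" + target, return_tensors="pt")
    loss = model(**batch, labels=batch["input_ids"]).loss
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()

model.save_pretrained("gemma2-2b-structured-lora")  # writes only the adapter weights
```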
#LLM#Craft fortunes using Ollama
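Under the hood, a tool like this typically boils down to a single call to Ollama's local REST API. A minimal sketch, assuming a running Ollama daemon on the default port and the gemma2:2b model tag (both assumptions, not details taken from the project):

```python
import json
import urllib.request

payload = {
    "model": "gemma2:2b",  # assumed model tag; any locally pulled model works
    "prompt": "Write a one-sentence fortune-cookie fortune.",
    "stream": False,       # return a single JSON object instead of a token stream
}
req = urllib.request.Request(
    "http://localhost:11434/api/generate",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(json.loads(resp.read())["response"])
```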
#LLM#A Gemma2 2B model fine-tuned on e-commerce data.
A program for searching Telegram groups and channels using GPT 🔍. Allows you to search for communities by keywords 🔑.
#NLP#A complete guide to NLP and ML for text processing, covering rule-based models, RNNs, CNNs, Transformers, entity detection, sentiment analysis, LLM fine-tuning, RAG, and prompt engineering with tools ...
#LLM#AI Discord Bot (GEMM-X) is an intelligent assistant for Discord, leveraging AI technologies from multiple providers to generate images, create music, produce speech, and more. It supports custom perso...
This project focuses on efficient machine translation for nine Indic languages using the fine-tuned Gemma2-2B LLM and adapter switching, reducing computational overhead. It also leverages agentic meth...
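The adapter-switching idea can be sketched with PEFT: one frozen Gemma2-2B base plus one LoRA adapter per target language, swapped at inference time so only a small adapter changes between languages. The adapter paths, names, and prompt format below are placeholders, not this project's artifacts.

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_name = "google/gemma-2-2b"  # assumed base checkpoint
tokenizer = AutoTokenizer.from_pretrained(base_name)
base = AutoModelForCausalLM.from_pretrained(base_name)

# Attach one LoRA adapter per language; only the active adapter affects generation.
model = PeftModel.from_pretrained(base, "adapters/hindi", adapter_name="hindi")
model.load_adapter("adapters/tamil", adapter_name="tamil")

def translate(text: str, language: str) -> str:
    model.set_adapter(language)  # switch adapters without reloading the 2B base
    inputs = tokenizer(f"Translate to {language}: {text}", return_tensors="pt")
    output = model.generate(**inputs, max_new_tokens=64)
    return tokenizer.decode(output[0], skip_special_tokens=True)

print(translate("How are you?", "hindi"))
```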
Code for a paper submitted to GRADES-NDA 2025