NEW! AutoTune™ From Armilla AI
AutoTune™ enables robust Generative AI solutions with automated alignment
Harden your Generative AI solutions with advanced testing and fine-tuning
to increase performance and reduce bias, hallucinations and safety issues.
Benefits of AutoTune
-
Immediate Value
A wide range of out-of-the-box alignment controls available, with the ability to customize or create new controls. -
High Performance
Advanced fine-tuning, enabling greater robustness than traditional approaches. -
No Data Required
Concept-based approach requires minimal data, and minimal human-in-the-loop contact. Auto generates targeted synthetic data. -
Automated Testing
Each alignment scenario can be tested and assessed independently. Detect bias, security holes, tonality issues, hallucinations easily. -
Supports Major Base Models
Including OpenAI LLMs like GPT, Stable Diffusion image generation models, open source models, with Bard and others coming soon.
STEP 1
Define:
Use out-of-the-box alignment controls or define your own performance expectations using concepts (not datasets), enabling very broad alignment to your goals.
Example use cases:
Currently Supported Alignment Controls
Class | Functionality | Description | Control Type |
---|---|---|---|
Security | Privacy filtering | Redacts PII and sensitive information, preventing leakage to / from the base model | Guardrail |
Fact checking | Provides a framework for detecting hallucinations and validating information returned from the model | Guardrail | |
Jailbreaks | Provides robustness testing, fine-tuning and prevention of common hacking attempts | Hybrid | |
Toxicity | Provides detection and interception of toxic inputs & outputs | Guardrail | |
Bias | Stereotypical tuning | Change stereotypical conventions and mitigate biases from training data that has been incorporated into the base model | Fine-tuning |
Ethical filtering rules | Provide and fine-tune with additional context to ensure context-appropiate responses | Hybrid | |
Custom | Custom datasets | Allows custom datasets to be used for answering and guardrails around its answers | Hybrid |
Transfer learning and custom classification | Allows development of custom training of default classifications as well as smaller / more secure models | Fine-tuning | |
Prompt optimization | Optimizes responses via generated prompt engineering | Guardrail | |
Tonality | Provides fine-tuning for adjusting the tonality and behavior of the model | Fine-tuning |