- Why local AI matters for file management privacy
- Setting up Microsoft Foundry Local (the default provider)
- Using Ollama as an alternative local provider
- Understanding the translation pipeline
- Cloud provider fallback when you need more power
## Why Local AI Matters
FileFortress is built on a simple promise: your file data never leaves your device. AI-powered search should respect that same principle.
When you use a cloud-based AI service to search your files, your search queries — which often contain file names, folder paths, and project names — are transmitted to a remote server. With local AI, the language model runs on your hardware. Your queries stay on your machine. Your search patterns remain private.
- Complete privacy — queries never leave your machine
- No API keys — no accounts to create or billing to manage
- Offline capable — works without internet once models are downloaded
- Data sovereignty — you control where AI processing happens
## Getting Started with Microsoft Foundry Local
Microsoft Foundry Local is the default AI provider for FileFortress. It runs small, capable language models directly on your machine through an OpenAI-compatible API endpoint.
### Step 1: Install Microsoft Foundry Local
Download and install Microsoft Foundry Local from the official Microsoft documentation. Once installed, the `foundry` CLI tool will be available on your system.
### Step 2: Start the Foundry Service

```bash
# Check Foundry status
foundry service status

# The service typically starts automatically
# FileFortress auto-detects the endpoint
```
### Step 3: Configure FileFortress

FileFortress auto-detects Microsoft Foundry Local. You can verify and configure it explicitly:

```bash
# Enable and configure Foundry as your AI provider
filefortress tools configure foundry --enable

# Specify a particular model (default is phi-3)
filefortress tools configure foundry --enable --model phi-4-openvino-gpu:1

# If Foundry is installed in a non-standard location
filefortress tools configure foundry --enable --custom-path /path/to/foundry
```
Foundry model IDs include backend/runtime suffixes. A generic name like `phi-4` may fail with 400 errors, while the exact ID `phi-4-openvino-gpu:1` works. Check the models available on your Foundry endpoint to get the exact ID.
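If you prefer to script the lookup, an OpenAI-compatible endpoint exposes its models under `/v1/models`. A minimal sketch of extracting the exact IDs, assuming the response follows the standard OpenAI list shape (the IDs below are illustrative, not a real inventory):

```python
import json

# Hypothetical /v1/models response from a local OpenAI-compatible
# endpoint. The field names follow the OpenAI list shape; the model
# IDs below are illustrative.
sample_response = """
{
  "object": "list",
  "data": [
    {"id": "phi-4-openvino-gpu:1", "object": "model"},
    {"id": "phi-3-mini-cpu:2", "object": "model"}
  ]
}
"""

def exact_model_ids(raw: str) -> list[str]:
    """Return the exact IDs you can pass to --model."""
    return [m["id"] for m in json.loads(raw)["data"]]

ids = exact_model_ids(sample_response)
# Note that a generic name like "phi-4" is not in the list --
# only the full suffixed IDs are, which is why a generic name can 400.
```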
### Step 4: Verify It Works

```bash
# Run a test query with dry-run (no search executed)
filefortress ai "find images" --dry-run

# If you see interpreted filters, your setup is working
# If you see errors, check the Troubleshooting section below
```
## Using Ollama as an Alternative
Ollama is another excellent option for running AI models locally. It supports a wide range of open-weight models and provides an OpenAI-compatible API.
### Setup with Ollama

```bash
# 1. Install Ollama from https://ollama.com

# 2. Pull a model (phi-3 or similar small model recommended)
ollama pull phi3

# 3. Ollama runs on http://localhost:11434 by default
# Configure FileFortress to use it as a provider
# (Provider configuration is done through the OpenAI-compatible provider setup)
```
Ollama is a great choice if you want to experiment with different models or use models that aren't available through Foundry.
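Because Ollama exposes an OpenAI-compatible API at `http://localhost:11434/v1`, any OpenAI-style client can talk to it. Here is a sketch of the request body such a client would POST to `/v1/chat/completions`; the system instruction is illustrative, not FileFortress's actual internal prompt:

```python
import json

def build_chat_request(model: str, query: str) -> str:
    """Body for POST http://localhost:11434/v1/chat/completions."""
    body = {
        "model": model,
        "messages": [
            # Illustrative system instruction, not FileFortress's real prompt
            {"role": "system", "content": "Translate file-search queries into JSON filters."},
            {"role": "user", "content": query},
        ],
        "temperature": 0,  # deterministic output suits query translation
    }
    return json.dumps(body)

payload = build_chat_request("phi3", "find images smaller than 5kb")
```

Sending the request requires the Ollama service to be running; the sketch only constructs the payload.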
## Understanding the Translation Pipeline
When you run `filefortress ai "find large videos from Google Drive"`, here's what happens behind the scenes:
### 1. Prompt Construction
FileFortress builds a schema-aware system prompt that includes your configured remote names, available filter fields, and the expected JSON response format.
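A schema-aware prompt of this kind can be sketched as follows. The remote names and filter fields are examples only; the prompt FileFortress actually builds will differ in wording and detail:

```python
# Illustrative filter fields; not FileFortress's full schema
FILTER_FIELDS = ["mediaType", "sizeMinBytes", "sizeMaxBytes", "modifiedAfter", "remoteName"]

def build_system_prompt(remotes: list[str]) -> str:
    """Assemble a system prompt from the configured remote names."""
    return (
        "Translate the user's file-search request into JSON filters.\n"
        f"Known remotes: {', '.join(remotes)}\n"
        f"Allowed fields: {', '.join(FILTER_FIELDS)}\n"
        "Respond with a single JSON object and nothing else."
    )

prompt = build_system_prompt(["gdrive", "onedrive", "s3"])
```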
### 2. Local Model Inference

Your query is sent to the local model (running on localhost). The model returns structured JSON with fields like `mediaType`, `sizeMinBytes`, `remoteName`, etc.
### 3. Filter Mapping
FileFortress maps the JSON response into the same search filters used by the search command. This ensures AI search and manual search produce identical results.
### 4. Search Execution
The mapped filters are executed against your local encrypted file index. Results are returned exactly as if you had typed the explicit search flags yourself.
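The middle of the pipeline can be sketched end to end. The JSON fields (`mediaType`, `sizeMinBytes`, `remoteName`) come from the description above; the flag names in the mapping table are hypothetical, not FileFortress's actual CLI surface:

```python
import json

# Hypothetical model output for "find large videos from Google Drive"
model_output = '{"mediaType": "video", "sizeMinBytes": 104857600, "remoteName": "gdrive"}'

# Field-to-flag mapping; these flag names are illustrative only
FIELD_TO_FLAG = {
    "mediaType": "--media-type",
    "sizeMinBytes": "--size-min-bytes",
    "remoteName": "--remote",
}

def to_search_flags(raw: str) -> list[str]:
    """Map the model's JSON onto the same flags a manual search would use."""
    filters = json.loads(raw)
    flags: list[str] = []
    for field, flag in FIELD_TO_FLAG.items():
        if field in filters:
            flags += [flag, str(filters[field])]
    return flags

flags = to_search_flags(model_output)
```

Mapping through a fixed table like this is what guarantees that an AI query and a hand-typed search hit the same filter code path.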
The AI model receives only your search prompt text and the names of your configured remotes (so it can match remote-specific queries). It never sees your file names, folder structures, file contents, or metadata. The model's job is purely translation — from natural language to JSON filters.
## Writing Effective Queries
The AI command works best with filter-oriented prompts. Include concrete hints about what you're looking for:
```bash
# Strong prompts — include filter intent
filefortress ai "find images smaller than 5kb"
filefortress ai "find videos on Google Drive modified in last 30 days"
filefortress ai "find documents larger than 10mb on onedrive"
filefortress ai "find archives from S3 older than a year"

# Weak prompts — too vague or not file-search related
filefortress ai "how many remotes do I have?"   # Account question, not a search
filefortress ai "find stuff"                    # Too vague for useful filters
filefortress ai "help me organize my files"     # Not a search query
```
For detailed prompt patterns and strategies, see the AI Query Patterns Guide.
## Cloud Provider Fallback
When you need capabilities beyond what local models offer, FileFortress supports cloud-hosted AI providers:
- OpenAI — GPT-4 and later models for complex queries
- Azure OpenAI — enterprise-grade, running in your own Azure tenant
- OpenRouter — access dozens of models through one API
- Any OpenAI-compatible endpoint — self-hosted or custom providers
When using a cloud AI provider, only your search prompt text is sent to the provider. Your file names, metadata, folder structures, and search results are never transmitted. Use --dry-run to preview what the AI interprets before executing any search.
## Troubleshooting
Common issues with local AI setup:
| Symptom | Likely Cause | Fix |
|---|---|---|
| 400 error from provider | Model ID mismatch | Use the exact model ID from the endpoint (e.g., `phi-4-openvino-gpu:1`, not `phi-4`) |
| Connection refused | Foundry/Ollama not running | Start the service: `foundry service status` or `ollama serve` |
| Slow first response | Model loading into memory | Normal for the first query; subsequent queries are faster |
| No filters extracted | Query is not search-oriented | Rephrase to include filter intent (media type, size, date, remote) |
For a complete troubleshooting workflow, see the AI Troubleshooting Guide.
## Related Resources
- AI Command Reference — complete syntax, options, and examples
- AI Query Patterns Guide — write prompts that translate cleanly to filters
- AI Troubleshooting Guide — diagnose provider, model, and interpretation issues
- Your Files, Your AI, Your Machine — why we chose local AI
- From 400 Errors to Reliable Results — real-world Foundry configuration lessons