Common Issues
Step-by-step solutions for the most common Dropstone Desktop and Agent issues
This guide focuses on the two most common issues users face: downloading and installing Dropstone Desktop on macOS and using unlimited (open-source) models. These detailed walkthroughs will help you resolve issues quickly.
macOS Installation Issues (Apple Silicon)
Issue 1: Homebrew Installation Fails
Problem: Running `brew tap blankline-org/dropstone` or `brew install --cask dropstone` fails with various errors.
Solution 1: Homebrew Not Installed
Symptoms:

```
brew: command not found
```
Fix:

1. Install Homebrew first:

   ```
   /bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"
   ```

2. After installation completes, add Homebrew to your PATH. For M1/M2/M3/M4 Macs (Apple Silicon):

   ```
   echo 'eval "$(/opt/homebrew/bin/brew shellenv)"' >> ~/.zprofile
   eval "$(/opt/homebrew/bin/brew shellenv)"
   ```

3. Verify Homebrew is working:

   ```
   brew --version
   ```

   You should see `Homebrew 4.x.x` or similar.

4. Now retry the Dropstone installation:

   ```
   brew tap blankline-org/dropstone
   brew install --cask dropstone
   ```
Solution 2: "Tap Not Found" Error
Symptoms:

```
Error: Invalid tap name: blankline-org/dropstone
# or
Error: No available formula or cask with the name "dropstone"
```
Fix:

1. Check your internet connection:

   ```
   ping -c 3 github.com
   ```

   If this fails, fix your network connection first.

2. Update Homebrew:

   ```
   brew update
   ```

   Wait for the update to complete (this may take 1-5 minutes).

3. Clear the Homebrew cache:

   ```
   rm -rf "$(brew --cache)"
   ```

4. Try adding the tap again:

   ```
   brew tap blankline-org/dropstone
   ```

5. Verify the tap was added:

   ```
   brew tap | grep dropstone
   ```

   You should see `blankline-org/dropstone`.

6. Install Dropstone:

   ```
   brew install --cask dropstone
   ```
Solution 3: "Cask Install Failed" Error
Symptoms:

```
Error: Cask 'dropstone' is unavailable
# or
Error: Download failed
```
Fix:

1. Check whether the tap is properly added:

   ```
   brew tap
   ```

   If `blankline-org/dropstone` is missing, add it:

   ```
   brew tap blankline-org/dropstone
   ```

2. Try a direct installation with the full path:

   ```
   brew install --cask blankline-org/dropstone/dropstone
   ```

3. If it still fails, check Homebrew permissions:

   ```
   sudo chown -R $(whoami) /opt/homebrew
   ```

4. Retry the installation:

   ```
   brew install --cask dropstone --force
   ```
Issue 2: macOS Blocks Dropstone from Opening
Problem: After installation, when you try to open Dropstone Desktop, macOS shows "Dropstone can't be opened because Apple cannot check it for malicious software" or similar security warning.
Solution: Allow Dropstone in System Settings
Step-by-Step Fix:

1. Try opening Dropstone (it will fail, but this triggers the security prompt):
   - Open Launchpad or the Applications folder
   - Click Dropstone
   - macOS will show the security warning

2. Open System Settings:
   - Click the Apple menu → System Settings
   - Or use Spotlight: press `Cmd + Space`, type "System Settings", and press Enter

3. Navigate to Security:
   - Click Privacy & Security in the left sidebar
   - Scroll down to the Security section

4. Allow Dropstone:
   - You should see the message: "Dropstone was blocked from use because it is not from an identified developer"
   - Click the Open Anyway button next to this message
   - Enter your Mac password if prompted

5. Try opening Dropstone again:
   - Go back to Applications or Launchpad
   - Click Dropstone
   - Another prompt may appear: click Open

6. Dropstone should now launch successfully.
Alternative Method (Right-Click to Open):
- Open Finder
- Go to Applications folder
- Find Dropstone app
- Right-click (or Control + Click) on Dropstone
- Select Open from the menu
- Click Open in the confirmation dialog
- Dropstone will now open (and will open normally from now on)
If the Methods Above Don't Work:

Try removing the quarantine attribute:

```
sudo xattr -cr /Applications/Dropstone.app
```

Then try opening Dropstone again.
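Before stripping attributes, you can first confirm that quarantine is actually the cause. A small sketch: `com.apple.quarantine` is the standard macOS quarantine attribute, and the path assumes a default installation.

```shell
# Report whether macOS has quarantined the app. Adjust the path if you
# installed Dropstone somewhere other than /Applications.
is_quarantined() {
  # xattr -p prints the attribute value and fails if it is absent
  xattr -p com.apple.quarantine "$1" >/dev/null 2>&1
}

if is_quarantined /Applications/Dropstone.app; then
  echo "quarantined - removing the attribute should help"
else
  echo "not quarantined (or app not found) - the warning has another cause"
fi
```

If the first branch prints, the `sudo xattr -cr` command above is the right fix; otherwise look at the Privacy & Security settings instead.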
Issue 3: "Rosetta 2 Required" Error
Problem: macOS shows "To open Dropstone, you need to install Rosetta" or the app crashes immediately on M1/M2/M3/M4 Macs.
Solution: Install Rosetta 2
What is Rosetta 2? Rosetta 2 allows Intel-based apps to run on Apple Silicon Macs. Some components of Dropstone may require it.
Install Rosetta 2:

1. Automatic installation (recommended):

   ```
   softwareupdate --install-rosetta --agree-to-license
   ```

   This will:
   - Download Rosetta 2
   - Install it automatically
   - Accept the license agreement
   - Take 1-3 minutes, depending on internet speed

2. Verify the installation:

   ```
   /usr/bin/pgrep -q oahd && echo "Rosetta 2 is installed" || echo "Rosetta 2 is not installed"
   ```

   You should see: `Rosetta 2 is installed`

3. Launch Dropstone again:
   - Open it from Applications or Launchpad
   - It should now work without Rosetta errors
Issue 4: Upgrade Issues (Updating Existing Dropstone)
Problem: You have an old version of Dropstone and want to upgrade, but `brew upgrade` doesn't work or causes errors.
Solution: Clean Upgrade Process
Step-by-Step Upgrade:

1. Check the current version:

   ```
   brew list --cask dropstone
   ```

2. Update Homebrew first:

   ```
   brew update
   ```

3. Upgrade Dropstone:

   ```
   brew upgrade --cask dropstone
   ```

If the upgrade fails:

1. Uninstall the old version:

   ```
   brew uninstall --cask dropstone
   ```

2. Clear the cache:

   ```
   rm -rf "$(brew --cache)"/downloads/*dropstone*
   ```

3. Reinstall the latest version:

   ```
   brew install --cask dropstone
   ```

4. Verify the installation:

   ```
   ls -la /Applications/Dropstone.app
   ```
Issue 5: "Permission Denied" During Installation
Problem: Installation fails with permission errors.
Solution: Fix Homebrew Permissions

```
# Fix Homebrew directory ownership
sudo chown -R $(whoami) /opt/homebrew

# Fix Caskroom permissions
sudo chown -R $(whoami) /opt/homebrew/Caskroom

# Retry installation
brew install --cask dropstone
```
Unlimited Models (Open-Source) Issues
Understanding Unlimited Models
What are Unlimited Models?
- Open-source AI models that run locally on your machine
- Examples: Qwen3 32B, GPT-OSS 120B, Qwen2.5-Coder 8B
- "Unlimited" means no token usage costs once installed
- Require sufficient RAM, disk space, and optionally GPU
Issue 6: Can't Find Unlimited Models Option
Problem: You're trying to use unlimited models but can't find where to access them in Dropstone Agent.
Solution: Access Unlimited Models
Step 1: Open Dropstone Agent Settings
- Launch Dropstone Desktop
- Click the Dropstone Agent icon in the sidebar
- Click the three-dot menu (⋯) at the top right
- Select Settings (gear icon)
Step 2: Navigate to API Configuration
- In Settings, click API Configuration section
- Look for Provider Selection dropdown
- Select Ollama as the provider
- Ollama is the system that runs unlimited/open-source models locally
Step 3: Install Ollama (if not installed)
If you see "Ollama not detected" or a similar message:

1. Install Ollama on macOS:

   ```
   brew install ollama
   ```

2. Install Ollama on Windows:
   - Download from: ollama.ai/download
   - Run the installer
   - Follow the installation wizard

3. Verify Ollama is installed:

   ```
   ollama --version
   ```

   This should show `ollama version 0.x.x` or similar.

4. Start the Ollama service:

   ```
   ollama serve
   ```

   Leave this terminal window open (Ollama runs in the background).
Step 4: Download Models

1. Open a new terminal window.

2. Pull your desired model.

   For lightweight coding (8GB RAM minimum):

   ```
   ollama pull qwen2.5-coder:7b
   ```

   For better performance (16GB RAM recommended):

   ```
   ollama pull qwen2.5-coder:14b
   ```

   For best quality (32GB RAM required):

   ```
   ollama pull qwen2.5:32b
   ```

3. Wait for the download to complete (5-30 minutes, depending on model size and internet speed).

4. Verify the model is available:

   ```
   ollama list
   ```

   You should see your downloaded model listed.
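The verification in step 4 can also be scripted, for example in a setup script. A minimal sketch, assuming `ollama list` prints a header row followed by one model per line with the model name in the first column:

```shell
# Return success if the named model appears in `ollama list`.
# Assumes ollama's tabular output: a header line, then one model per
# line with the name (e.g. qwen2.5-coder:7b) in the first column.
have_model() {
  ollama list 2>/dev/null | awk 'NR > 1 { print $1 }' | grep -qx "$1"
}

if have_model qwen2.5-coder:7b; then
  echo "model present"
else
  echo "model missing - run: ollama pull qwen2.5-coder:7b"
fi
```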
Step 5: Select Model in Dropstone Agent
- Return to Dropstone Agent Settings
- In API Configuration → Ollama section
- Click Model Selection dropdown
- Select your downloaded model from the list
- Click Save or Apply
Step 6: Test Unlimited Model
- Go back to Dropstone Agent chat
- Type a simple request:
Write a hello world function in Python
- The agent should respond using your local Ollama model
- No token cost - completely free and unlimited!
Issue 7: Unlimited Model Not Responding
Problem: You've installed Ollama and downloaded a model, but Dropstone Agent shows "Model not responding" or timeout errors.
Solution: Ensure Ollama is Running
Step 1: Check if Ollama service is running
```
# Check the Ollama process
ps aux | grep ollama

# Or check with the Ollama command
ollama list
```
If Ollama is not running:

1. Start the Ollama service:

   ```
   ollama serve
   ```

   Keep this terminal window open, or run it in the background:

   ```
   # macOS/Linux: run in the background
   nohup ollama serve > /dev/null 2>&1 &

   # Windows: install as a service (run as Administrator)
   ollama serve --install
   ```

2. Verify the service is running:

   ```
   curl http://localhost:11434
   ```

   This should return: `Ollama is running`
Step 2: Check the Ollama connection in Dropstone
- Open Dropstone Agent Settings
- Go to API Configuration → Ollama
- Check the Ollama URL field - it should be `http://localhost:11434` (the default)
- Click the Test Connection button
- It should show: ✅ "Connected successfully"
If the connection fails:

1. Try an alternative address:
   - Change the URL to `http://127.0.0.1:11434`
   - Click Test Connection again

2. Check that the firewall isn't blocking port 11434:

   ```
   # macOS: check the port
   lsof -i :11434

   # Windows: check the port
   netstat -ano | findstr :11434
   ```

3. Restart the Ollama service:

   ```
   # Stop Ollama
   pkill ollama

   # Start it again
   ollama serve
   ```
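The connection checks above can be wrapped into one reusable probe. A sketch, assuming Ollama's default HTTP endpoint on port 11434 (pass a different URL if your setup differs):

```shell
# Probe the Ollama HTTP endpoint and report a readable status.
# 11434 is Ollama's default port; pass another URL as the first
# argument if you changed it.
check_ollama() {
  local url=${1:-http://localhost:11434}
  if curl -sf --max-time 3 "$url" >/dev/null; then
    echo "reachable"
  else
    echo "unreachable"
  fi
}

check_ollama                          # default endpoint
check_ollama http://127.0.0.1:11434   # alternative loopback address
```

If both calls print "unreachable", start `ollama serve` and probe again before touching the Dropstone settings.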
Issue 8: Unlimited Model is Too Slow
Problem: Unlimited model responds but is extremely slow (takes minutes for simple responses).
Solution: Optimize Model Performance
Cause: Model is running on CPU instead of GPU, or insufficient RAM.
Step 1: Check System Resources

1. Check available RAM:

   ```
   # macOS
   vm_stat | head -10

   # Windows
   systeminfo | findstr "Available Physical Memory"
   ```

2. Recommended RAM by model size:
   - 7B models: 8GB minimum, 16GB recommended
   - 14B models: 16GB minimum, 32GB recommended
   - 32B models: 32GB minimum, 64GB recommended

If you have insufficient RAM:
- Use a smaller model (e.g., `qwen2.5-coder:7b` instead of `:32b`)
- Close other applications to free up memory
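As a rule of thumb (an approximation, not an official Ollama formula), you can estimate a model's memory footprint from its parameter count and the bytes stored per weight, plus some overhead for the runtime and KV cache:

```shell
# Rough RAM estimate in GB: parameters (billions) x bytes per weight,
# plus ~20% overhead for the runtime and KV cache. Typical bytes per
# weight: ~2 for fp16, ~1 for 8-bit, ~0.5 for 4-bit quantization.
# All of these factors are ballpark assumptions.
est_ram_gb() {
  awk -v p="$1" -v b="$2" 'BEGIN { printf "%.1f", p * b * 1.2 }'
}

echo "7B  at fp16:  $(est_ram_gb 7 2) GB"
echo "7B  at 4-bit: $(est_ram_gb 7 0.5) GB"
echo "32B at 4-bit: $(est_ram_gb 32 0.5) GB"
```

By this estimate a 4-bit 7B model fits comfortably in 8GB of RAM, which lines up with the minimums listed above.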
Step 2: Enable GPU Acceleration (if you have compatible GPU)
For NVIDIA GPUs (Windows/Linux):

1. Check whether CUDA is detected:

   ```
   nvidia-smi
   ```

2. If CUDA is installed, Ollama will use the GPU automatically. Check the Ollama output to confirm:

   ```
   ollama run qwen2.5-coder:7b
   ```

   You should see `loaded model on GPU` in the output.

3. Force GPU usage by setting an environment variable:

   ```
   # Windows PowerShell
   $env:OLLAMA_GPU=1
   ollama serve

   # macOS/Linux
   export OLLAMA_GPU=1
   ollama serve
   ```
For Apple Silicon (M1/M2/M3/M4):
- Metal GPU acceleration is enabled automatically
- No configuration needed
- Check Activity Monitor → GPU tab to verify GPU usage
Step 3: Use Quantized Models (Faster, Less Accurate)
Quantized models are smaller and faster but slightly less accurate:

```
# Stop the current model
pkill ollama

# Pull a quantized version (4-bit quantization)
ollama pull qwen2.5-coder:7b-q4_0

# Or another 4-bit variant
ollama pull qwen2.5-coder:7b-q4_K_M
```
Then select the quantized model in Dropstone Agent settings.
Step 4: Adjust Context Window
Smaller context = faster responses:
- Open Dropstone Agent Settings
- Go to Context Management
- Set Context Window Size to "Small" (8,000 tokens)
- Reduce File Limit for Context to 10-20 files
- Save settings
Issue 9: Unlimited Model Doesn't Understand My Project
Problem: The unlimited model gives generic responses and doesn't seem to understand project-specific code patterns.
Solution: Give the Model More Context
Unlike cloud models, unlimited (Ollama) models have NO pre-loaded knowledge of your project. You must explicitly provide context.
Step 1: Use Context Mentions
Always include relevant files in your prompts:

```
@src/main.ts @src/utils/helper.ts

Add error handling to the processData function following the pattern used in this project
```
Step 2: Provide Examples
Show the model your coding style:

```
@src/services/userService.ts

Create a new productService.ts file following the exact same pattern as userService.ts
```
Step 3: Increase Context Window (if you have RAM)
- Open Dropstone Agent Settings
- Go to Context Management
- Set Context Window Size to "Large" (128,000 tokens)
- Increase File Limit for Context to 50+
- Save settings
Step 4: Let It Learn
Unlimited models in Dropstone Agent have learning capabilities:
- Use the agent consistently for 1-2 weeks
- Correct it when responses are wrong
- The Learning Engine will extract patterns
- Agent will gradually understand your project conventions
Issue 10: Downloaded Model Disappeared
Problem: You downloaded a model yesterday, but today it's not showing in Dropstone Agent.
Solution: Verify Ollama Models
Step 1: Check whether the model exists:

```
ollama list
```

If the model is missing from the list:

1. It may have been deleted - re-download it:

   ```
   ollama pull qwen2.5-coder:7b
   ```

2. Check the Ollama storage location:

   ```
   # macOS/Linux
   ls -la ~/.ollama/models

   # Windows
   dir %USERPROFILE%\.ollama\models
   ```
If the model exists but doesn't show up in Dropstone:

1. Restart the Ollama service:

   ```
   pkill ollama
   ollama serve
   ```

2. Restart Dropstone Desktop:
   - Quit Dropstone completely (Cmd+Q on Mac, Alt+F4 on Windows)
   - Reopen Dropstone Desktop
   - Check Dropstone Agent settings again

3. Manually refresh the model list in Dropstone:
   - Open Settings → API Configuration → Ollama
   - Click the Refresh Models button (if available)
   - Or toggle the provider to "Dropstone" and then back to "Ollama"
Issue 11: "Out of Memory" Error with Unlimited Models
Problem: Ollama crashes with "out of memory" error, or system becomes unresponsive.
Solution: Reduce Model Size or Memory Usage
Immediate Fix:

1. Use a smaller model:

   ```
   # Stop the current model
   pkill ollama

   # Switch to a smaller model
   ollama pull qwen2.5-coder:3b
   ```

2. Update Dropstone Agent to use the smaller model:
   - Settings → API Configuration → Ollama
   - Select `qwen2.5-coder:3b`
   - Save
Long-term Solutions:
Option 1: Increase System RAM
- Unlimited models need significant RAM
- 7B models: 16GB RAM recommended
- 32B models: 64GB RAM required
Option 2: Configure Swap (Use Disk as RAM - Slower)
⚠️ Warning: This makes models MUCH slower, but prevents crashes.

```
# macOS: swap is managed automatically,
# but make sure you have 20GB+ of free disk space

# Linux: create a swap file
sudo fallocate -l 16G /swapfile
sudo chmod 600 /swapfile
sudo mkswap /swapfile
sudo swapon /swapfile

# Windows: increase virtual memory
# Control Panel → System → Advanced → Performance Settings → Advanced → Virtual Memory
# Set to 16GB+ (16384 MB)
```
Option 3: Use Cloud Models Instead
If your system can't handle unlimited models:
- Open Dropstone Agent Settings
- Switch provider to Dropstone or AWS Bedrock
- These run on cloud servers (cost per use, but no local resources needed)
Issue 12: How to Switch Between Unlimited and Cloud Models
Problem: You want to use unlimited models sometimes (free) and cloud models other times (faster/better).
Solution: Quick Provider Switching
Method 1: Settings Panel (Persistent Change)
- Open Dropstone Agent Settings
- Go to API Configuration
- Select a provider:
  - Ollama = unlimited models (local, free)
  - Dropstone = cloud models (fast, costs tokens)
  - AWS Bedrock = cloud models (enterprise)
- Click Save
- All future requests will use the selected provider
Method 2: Context Mention (Temporary Override)

Use `@mode` to switch temporarily:

```
@mode ollama
Write a Python function to parse JSON
# Uses the local Ollama model (unlimited)

@mode dropstone
Analyze this complex codebase and suggest architecture improvements
# Uses the Dropstone cloud model (faster, better for complex tasks)
```
Best Practice:
- Use Ollama (unlimited) for: Simple tasks, learning, experimentation, privacy-sensitive code
- Use Dropstone/Cloud for: Complex analysis, critical features, time-sensitive tasks
Still Having Issues?
Collect Diagnostic Information
Before contacting support, gather this info:
For macOS Issues:

```
# System info
sw_vers
system_profiler SPHardwareDataType | grep "Model\|Memory\|Chip"

# Homebrew info
brew --version
brew doctor

# Dropstone info (if installed)
ls -la /Applications/Dropstone.app
```
For Unlimited Models Issues:

```
# Ollama info
ollama --version
ollama list
ollama ps

# System resources
# macOS
vm_stat

# Windows
systeminfo | findstr "Memory"
```
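To avoid copy-pasting each command, the collection above can be bundled into a single report file (a convenience sketch, not an official Dropstone tool; `|| true` keeps the script going when a command is unavailable on this machine, so the error text lands in the report instead of aborting it):

```shell
# Write a single diagnostics report to attach to a support request.
# Commands that are missing on this machine produce an error line in
# the report rather than stopping the script.
report=dropstone-diagnostics.txt
{
  echo "== date ==";     date
  echo "== ollama ==";   ollama --version 2>&1 || true
  echo "== models ==";   ollama list 2>&1 || true
  echo "== endpoint =="; curl -s --max-time 3 http://localhost:11434 2>&1 || true
  echo "== memory ==";   vm_stat 2>&1 | head -5 || true
} > "$report"
echo "wrote $report"
```

Attach the resulting `dropstone-diagnostics.txt` to your support email along with any screenshots.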
Get Help

1. Community Support:
   - Discord Community - fastest response
   - GitHub Discussions
   - Include: OS version, error messages, and the steps you tried

2. Documentation:
   - macOS Setup Guide - complete installation walkthrough
   - Settings Guide - configure unlimited models

3. Direct Support:
   - Email: support@dropstone.io
   - Include the diagnostic information from above
   - Attach screenshots of errors
Quick Reference
macOS Installation Commands

```
# Install Homebrew (if needed)
/bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"

# Add to PATH (Apple Silicon)
echo 'eval "$(/opt/homebrew/bin/brew shellenv)"' >> ~/.zprofile
eval "$(/opt/homebrew/bin/brew shellenv)"

# Install Dropstone
brew tap blankline-org/dropstone
brew install --cask dropstone

# Allow in macOS Security (if blocked)
sudo xattr -cr /Applications/Dropstone.app

# Install Rosetta 2 (if needed)
softwareupdate --install-rosetta --agree-to-license

# Upgrade an existing Dropstone
brew upgrade --cask dropstone
```
Unlimited Models Quick Start

```
# Install Ollama
brew install ollama            # macOS
# or download from ollama.ai for Windows

# Start the Ollama service
ollama serve

# Download a model (choose based on your RAM)
ollama pull qwen2.5-coder:7b   # 8GB RAM minimum
ollama pull qwen2.5-coder:14b  # 16GB RAM recommended
ollama pull qwen2.5:32b        # 32GB RAM required

# Verify
ollama list

# Configure in Dropstone Agent:
# Settings → API Configuration → Select "Ollama" → Choose your model
```
These solutions cover 95% of common issues. If your problem isn't listed here, check the Troubleshooting Guide or contact support.