Common Issues

Step-by-step solutions for the most common Dropstone Desktop and Agent issues

This guide focuses on the two most common problem areas users face: installing Dropstone Desktop on macOS and running unlimited (open-source) models. These detailed walkthroughs will help you resolve issues quickly.


macOS Installation Issues (Apple Silicon)

Issue 1: Homebrew Installation Fails

Problem: When running brew tap blankline-org/dropstone or brew install --cask dropstone, the command fails with various errors.

Solution 1: Homebrew Not Installed

Symptoms:

brew: command not found

Fix:

  1. Install Homebrew first:

    /bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"
    
  2. After installation completes, add Homebrew to your PATH:

    For M1/M2/M3/M4 Macs (Apple Silicon):

    echo 'eval "$(/opt/homebrew/bin/brew shellenv)"' >> ~/.zprofile
    eval "$(/opt/homebrew/bin/brew shellenv)"
    
  3. Verify Homebrew is working:

    brew --version
    

    You should see: Homebrew 4.x.x or similar

  4. Now retry Dropstone installation:

    brew tap blankline-org/dropstone
    brew install --cask dropstone
    

Solution 2: "Tap Not Found" Error

Symptoms:

Error: Invalid tap name: blankline-org/dropstone
# or
Error: No available formula or cask with the name "dropstone"

Fix:

  1. Check your internet connection:

    ping -c 3 github.com
    

    If this fails, fix your network connection first.

  2. Update Homebrew:

    brew update
    

    Wait for update to complete (may take 1-5 minutes).

  3. Clear Homebrew cache:

    rm -rf $(brew --cache)
    
  4. Try adding tap again:

    brew tap blankline-org/dropstone
    
  5. Verify tap was added:

    brew tap | grep dropstone
    

    You should see: blankline-org/dropstone

  6. Install Dropstone:

    brew install --cask dropstone
    

Solution 3: "Cask Install Failed" Error

Symptoms:

Error: Cask 'dropstone' is unavailable
# or
Error: Download failed

Fix:

  1. Check if tap is properly added:

    brew tap
    

    If blankline-org/dropstone is missing, add it:

    brew tap blankline-org/dropstone
    
  2. Try direct installation with full path:

    brew install --cask blankline-org/dropstone/dropstone
    
  3. If still failing, check Homebrew permissions:

    sudo chown -R $(whoami) /opt/homebrew
    
  4. Retry installation:

    brew install --cask dropstone --force
    

Issue 2: macOS Blocks Dropstone from Opening

Problem: After installation, when you try to open Dropstone Desktop, macOS shows "Dropstone can't be opened because Apple cannot check it for malicious software" or similar security warning.

Solution: Allow Dropstone in System Settings

Step-by-Step Fix:

  1. Try opening Dropstone (it will fail, but this triggers the security prompt)

    • Open Launchpad or Applications folder
    • Click on Dropstone
    • macOS will show security warning
  2. Open System Settings:

    • Click the Apple menu () → System Settings
    • Or use Spotlight: Press Cmd + Space, type "System Settings", press Enter
  3. Navigate to Security:

    • Click Privacy & Security in the left sidebar
    • Scroll down to the Security section
  4. Allow Dropstone:

    • You should see a message: "Dropstone was blocked from use because it is not from an identified developer"
    • Click Open Anyway button next to this message
    • Enter your Mac password if prompted
  5. Try opening Dropstone again:

    • Go back to Applications or Launchpad
    • Click Dropstone
    • Another prompt may appear: Click Open
  6. Dropstone should now launch successfully


Alternative Method (Right-Click to Open):

  1. Open Finder
  2. Go to Applications folder
  3. Find Dropstone app
  4. Right-click (or Control + Click) on Dropstone
  5. Select Open from the menu
  6. Click Open in the confirmation dialog
  7. Dropstone will now open (and will open normally from now on)

If Above Methods Don't Work:

Try removing the quarantine attribute:

sudo xattr -cr /Applications/Dropstone.app

Then try opening Dropstone again.
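
Before clearing attributes, you can confirm that the quarantine flag is actually what is blocking the app. A quick check (a sketch; the path assumes the default /Applications install location):

```shell
# Check whether Gatekeeper's quarantine flag is set on the Dropstone bundle.
# com.apple.quarantine is the attribute macOS checks before showing the warning.
APP="/Applications/Dropstone.app"

check_quarantine() {
  if ! command -v xattr >/dev/null 2>&1; then
    echo "xattr is only available on macOS"
  elif xattr "$APP" 2>/dev/null | grep -q com.apple.quarantine; then
    echo "quarantine flag present - clear it with: sudo xattr -cr $APP"
  else
    echo "no quarantine flag on $APP - the warning has another cause"
  fi
}

check_quarantine
```

If the flag is absent but the warning persists, use the System Settings method above instead.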


Issue 3: "Rosetta 2 Required" Error

Problem: macOS shows "To open Dropstone, you need to install Rosetta" or the app crashes immediately on M1/M2/M3/M4 Macs.

Solution: Install Rosetta 2

What is Rosetta 2? Rosetta 2 allows Intel-based apps to run on Apple Silicon Macs. Some components of Dropstone may require it.

Install Rosetta 2:

  1. Automatic installation (recommended):

    softwareupdate --install-rosetta --agree-to-license
    

    This will:

    • Download Rosetta 2
    • Install it automatically
    • Accept the license agreement
    • Take 1-3 minutes depending on internet speed
  2. Verify installation:

    /usr/bin/pgrep -q oahd && echo "Rosetta 2 is installed" || echo "Rosetta 2 is not installed"
    

    You should see: Rosetta 2 is installed

  3. Launch Dropstone again:

    • Open from Applications or Launchpad
    • Should now work without Rosetta errors

Issue 4: Upgrade Issues (Updating Existing Dropstone)

Problem: You have an old version of Dropstone and want to upgrade, but brew upgrade doesn't work or causes errors.

Solution: Clean Upgrade Process

Step-by-Step Upgrade:

  1. Check current version:

    brew list --cask dropstone
    
  2. Update Homebrew first:

    brew update
    
  3. Upgrade Dropstone:

    brew upgrade --cask dropstone
    

If upgrade fails:

  1. Uninstall old version:

    brew uninstall --cask dropstone
    
  2. Clear cache:

    rm -rf $(brew --cache)/downloads/*dropstone*
    
  3. Reinstall latest version:

    brew install --cask dropstone
    
  4. Verify installation:

    ls -la /Applications/Dropstone.app
    

Issue 5: "Permission Denied" During Installation

Problem: Installation fails with permission errors.

Solution: Fix Homebrew Permissions

# Fix Homebrew directory ownership
sudo chown -R $(whoami) /opt/homebrew

# Fix Caskroom permissions
sudo chown -R $(whoami) /opt/homebrew/Caskroom

# Retry installation
brew install --cask dropstone

Unlimited Models (Open-Source) Issues

Understanding Unlimited Models

What are Unlimited Models?

  • Open-source AI models that run locally on your machine
  • Examples: Qwen3 32B, GPT-OSS 120B, Qwen2.5-Coder 7B
  • "Unlimited" means no token usage costs once installed
  • Require sufficient RAM, disk space, and optionally GPU

Issue 6: Can't Find Unlimited Models Option

Problem: You're trying to use unlimited models but can't find where to access them in Dropstone Agent.

Solution: Access Unlimited Models

Step 1: Open Dropstone Agent Settings

  1. Launch Dropstone Desktop
  2. Click the Dropstone Agent icon in the sidebar
  3. Click the three-dot menu (⋯) at the top right
  4. Select Settings (gear icon)

Step 2: Navigate to API Configuration

  1. In Settings, click API Configuration section
  2. Look for Provider Selection dropdown
  3. Select Ollama as the provider
    • Ollama is the system that runs unlimited/open-source models locally

Step 3: Install Ollama (if not installed)

If you see "Ollama not detected" or similar message:

  1. Install Ollama on macOS:

    brew install ollama
    
  2. Install Ollama on Windows:

    Download the installer from ollama.ai and run it.

  3. Verify Ollama is running:

    ollama --version
    

    Should show: ollama version 0.x.x or similar

  4. Start Ollama service:

    ollama serve
    

    Leave this terminal window open (Ollama runs in background)
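
Before downloading models, you can confirm the server is actually listening. A sketch, assuming Ollama's default port 11434:

```shell
# Probe the local Ollama server's /api/version endpoint.
# 11434 is Ollama's default port; set OLLAMA_URL if you customized it.
OLLAMA_URL="${OLLAMA_URL:-http://localhost:11434}"

probe_ollama() {
  if curl -sf --max-time 2 "$OLLAMA_URL/api/version" >/dev/null 2>&1; then
    echo "Ollama is reachable at $OLLAMA_URL"
  else
    echo "Ollama is NOT reachable at $OLLAMA_URL - start it with: ollama serve"
  fi
}

probe_ollama
```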

Step 4: Download Models

  1. Open new terminal window

  2. Pull your desired model:

    For lightweight coding (8GB RAM minimum):

    ollama pull qwen2.5-coder:7b
    

    For better performance (16GB RAM recommended):

    ollama pull qwen2.5-coder:14b
    

    For best quality (32GB RAM required):

    ollama pull qwen2.5:32b
    
  3. Wait for download to complete (can take 5-30 minutes depending on model size and internet speed)

  4. Verify model is available:

    ollama list
    

    You should see your downloaded model listed

Step 5: Select Model in Dropstone Agent

  1. Return to Dropstone Agent Settings
  2. In the API Configuration → Ollama section
  3. Click Model Selection dropdown
  4. Select your downloaded model from the list
  5. Click Save or Apply

Step 6: Test Unlimited Model

  1. Go back to Dropstone Agent chat
  2. Type a simple request: Write a hello world function in Python
  3. The agent should respond using your local Ollama model
  4. No token cost - completely free and unlimited!
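
If the agent does not answer, you can rule Dropstone out by querying the model straight through Ollama's HTTP API. A sketch; `qwen2.5-coder:7b` is just an example tag, use whatever `ollama list` shows:

```shell
# Send a prompt directly to the local model via Ollama's /api/generate endpoint.
# If this returns a completion, the model works and the problem is in
# Dropstone's configuration rather than in Ollama itself.
MODEL="qwen2.5-coder:7b"   # replace with a tag from `ollama list`

ask_ollama() {
  curl -sf --max-time 120 http://localhost:11434/api/generate \
    -d "{\"model\": \"$MODEL\", \"prompt\": \"$1\", \"stream\": false}" \
    || echo '{"error": "Ollama not reachable or model not pulled"}'
}

ask_ollama "Write a hello world function in Python"
```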

Issue 7: Unlimited Model Not Responding

Problem: You've installed Ollama and downloaded a model, but Dropstone Agent shows "Model not responding" or timeout errors.

Solution: Ensure Ollama is Running

Step 1: Check if Ollama service is running

# Check Ollama process
ps aux | grep ollama

# Or check with Ollama command
ollama list

If Ollama is not running:

  1. Start Ollama service:

    ollama serve
    

    Keep this terminal window open, or run in background:

    # macOS/Linux: Run in background
    nohup ollama serve > /dev/null 2>&1 &
    
    # Windows: the Ollama installer already runs Ollama in the background
    # (look for the Ollama icon in the system tray)
    
  2. Verify service is running:

    curl http://localhost:11434
    

    Should return: Ollama is running

Step 2: Check Ollama connection in Dropstone

  1. Open Dropstone Agent Settings
  2. Go to API Configuration → Ollama
  3. Check Ollama URL field
  4. Should be: http://localhost:11434 (default)
  5. Click Test Connection button
  6. Should show: ✅ "Connected successfully"

If connection fails:

  1. Try alternative port:

    • Change URL to: http://127.0.0.1:11434
    • Click Test Connection again
  2. Check firewall isn't blocking port 11434:

    # macOS: Check port
    lsof -i :11434
    
    # Windows: Check port
    netstat -ano | findstr :11434
    
  3. Restart Ollama service:

    # Stop Ollama
    pkill ollama
    
    # Start again
    ollama serve
    

Issue 8: Unlimited Model is Too Slow

Problem: Unlimited model responds but is extremely slow (takes minutes for simple responses).

Solution: Optimize Model Performance

Cause: Model is running on CPU instead of GPU, or insufficient RAM.

Step 1: Check System Resources

  1. Check available RAM:

    # macOS
    vm_stat | head -10
    
    # Windows
    systeminfo | findstr /C:"Available Physical Memory"
    
  2. Recommended RAM by model:

    • 7B models: 8GB minimum, 16GB recommended
    • 14B models: 16GB minimum, 32GB recommended
    • 32B models: 32GB minimum, 64GB recommended

If you have insufficient RAM:

  • Use a smaller model (e.g., qwen2.5-coder:7b instead of :32b)
  • Close other applications to free up memory
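
The RAM guidance above can be condensed into a tiny helper that maps available memory to the model tags used in this guide (a sketch; the thresholds follow the minimums listed above):

```shell
# Suggest an Ollama model tag for a given amount of RAM (in GB),
# following the minimums above (7B -> 8GB, 14B -> 16GB, 32B -> 32GB).
suggest_model() {
  ram_gb=$1
  if [ "$ram_gb" -ge 32 ]; then
    echo "qwen2.5:32b"
  elif [ "$ram_gb" -ge 16 ]; then
    echo "qwen2.5-coder:14b"
  else
    echo "qwen2.5-coder:7b"
  fi
}

suggest_model 8    # prints qwen2.5-coder:7b
suggest_model 64   # prints qwen2.5:32b
```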

Step 2: Enable GPU Acceleration (if you have compatible GPU)

For NVIDIA GPU (Windows/Linux):

  1. Check if CUDA is detected:

    nvidia-smi
    
  2. If CUDA is installed, Ollama will use the GPU automatically

  3. Verify GPU usage while a model is loaded:

    # Load a model in one terminal
    ollama run qwen2.5-coder:7b
    
    # Then check where it is running from another terminal
    ollama ps
    

    The PROCESSOR column should show 100% GPU. If it shows CPU, update your NVIDIA drivers and restart Ollama.

For Apple Silicon (M1/M2/M3/M4):

  • Metal GPU acceleration is enabled automatically
  • No configuration needed
  • Check Activity Monitor → GPU tab to verify GPU usage

Step 3: Use Quantized Models (Faster, Less Accurate)

Quantized models are smaller and faster but slightly less accurate:

# Stop current model
pkill ollama

# Pull quantized version (uses 4-bit quantization)
ollama pull qwen2.5-coder:7b-q4_0

# Or even more compressed
ollama pull qwen2.5-coder:7b-q4_K_M

Then select the quantized model in Dropstone Agent settings.
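
As a rough rule of thumb for why quantization helps: weight memory is approximately parameters × bits-per-weight ÷ 8, ignoring the KV cache and runtime overhead. A back-of-envelope sketch:

```shell
# Rough weight-memory estimate in GB: params (billions) * bits / 8.
# This ignores the KV cache and runtime overhead, so treat it as a floor.
estimate_gb() {
  params_billion=$1
  bits=$2
  echo $(( params_billion * bits / 8 ))
}

echo "7B at 16-bit: ~$(estimate_gb 7 16) GB"   # ~14 GB
echo "7B at 4-bit:  ~$(estimate_gb 7 4) GB"    # ~3 GB
```

This is why a 4-bit quantized 7B model fits comfortably on an 8GB machine where the full-precision version would not.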

Step 4: Adjust Context Window

Smaller context = faster responses:

  1. Open Dropstone Agent Settings
  2. Go to Context Management
  3. Set Context Window Size to "Small" (8,000 tokens)
  4. Reduce File Limit for Context to 10-20 files
  5. Save settings

Issue 9: Unlimited Model Doesn't Understand My Project

Problem: The unlimited model gives generic responses and doesn't seem to understand project-specific code patterns.

Solution: Give the Model More Context

Unlike cloud models, unlimited (Ollama) models have NO pre-loaded knowledge of your project. You must explicitly provide context.

Step 1: Use Context Mentions

Always include relevant files in your prompts:

@src/main.ts @src/utils/helper.ts

Add error handling to the processData function following the pattern used in this project

Step 2: Provide Examples

Show the model your coding style:

@src/services/userService.ts

Create a new productService.ts file following the exact same pattern as userService.ts

Step 3: Increase Context Window (if you have RAM)

  1. Open Dropstone Agent Settings
  2. Go to Context Management
  3. Set Context Window Size to "Large" (128,000 tokens)
  4. Increase File Limit for Context to 50+
  5. Save settings

Step 4: Let It Learn

Unlimited models in Dropstone Agent have learning capabilities:

  1. Use the agent consistently for 1-2 weeks
  2. Correct it when responses are wrong
  3. The Learning Engine will extract patterns
  4. Agent will gradually understand your project conventions

Issue 10: Downloaded Model Disappeared

Problem: You downloaded a model yesterday, but today it's not showing in Dropstone Agent.

Solution: Verify Ollama Models

Step 1: Check if model exists:

ollama list

If model is missing from list:

  1. It may have been deleted - re-download:

    ollama pull qwen2.5-coder:7b
    
  2. Check Ollama storage location:

    # macOS/Linux
    ls -la ~/.ollama/models
    
    # Windows
    dir %USERPROFILE%\.ollama\models
    

If model exists but doesn't show in Dropstone:

  1. Restart Ollama service:

    pkill ollama
    ollama serve
    
  2. Restart Dropstone Desktop:

    • Quit Dropstone completely (Cmd+Q on Mac, Alt+F4 on Windows)
    • Reopen Dropstone Desktop
    • Check Dropstone Agent settings again
  3. Manually refresh model list in Dropstone:

    • Open Settings → API Configuration → Ollama
    • Click Refresh Models button (if available)
    • Or toggle provider to "Dropstone" then back to "Ollama"
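
You can also ask the running Ollama server directly which models it can see — this is the same list a refresh in Dropstone should show. A sketch, assuming the default port:

```shell
# Query Ollama's /api/tags endpoint, which lists all locally available models
# (the same data `ollama list` prints). Falls back to an empty list when the
# server is not running.
list_models() {
  curl -sf --max-time 2 http://localhost:11434/api/tags \
    || echo '{"models": []}'
}

list_models
```

If this returns your model but Dropstone still doesn't show it, the problem is on the Dropstone side (restart it as described above).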

Issue 11: "Out of Memory" Error with Unlimited Models

Problem: Ollama crashes with "out of memory" error, or system becomes unresponsive.

Solution: Reduce Model Size or Memory Usage

Immediate Fix:

  1. Use smaller model:

    # Stop current model
    pkill ollama
    
    # Switch to smaller model
    ollama pull qwen2.5-coder:3b
    
  2. Update Dropstone Agent to use smaller model:

    • Settings → API Configuration → Ollama
    • Select qwen2.5-coder:3b
    • Save

Long-term Solutions:

Option 1: Increase System RAM

  • Unlimited models need significant RAM
  • 7B models: 16GB RAM recommended
  • 32B models: 64GB RAM required

Option 2: Configure Swap (Use Disk as RAM - Slower)

⚠️ Warning: This makes models MUCH slower but prevents crashes

# macOS: Increase swap space
# This is automatic on macOS, but ensure you have 20GB+ free disk space

# Linux: Create swap file
sudo fallocate -l 16G /swapfile
sudo chmod 600 /swapfile
sudo mkswap /swapfile
sudo swapon /swapfile

# Windows: Increase virtual memory
# Control Panel → System → Advanced → Performance Settings → Advanced → Virtual Memory
# Set to 16GB+ (16384 MB)
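
After enabling swap, verify that it is active. A sketch covering Linux and macOS (on Windows, check Task Manager → Performance → Memory instead):

```shell
# Report current swap capacity/usage so you can confirm the new swap is active.
swap_status() {
  if command -v free >/dev/null 2>&1; then
    free -h | awk '/^Swap/ {print "swap total:", $2, "used:", $3}'   # Linux
  elif sysctl vm.swapusage >/dev/null 2>&1; then
    sysctl vm.swapusage                                             # macOS
  else
    echo "no swap inspection tool found"
  fi
}

swap_status
```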

Option 3: Use Cloud Models Instead

If your system can't handle unlimited models:

  1. Open Dropstone Agent Settings
  2. Switch provider to Dropstone or AWS Bedrock
  3. These run on cloud servers (cost per use, but no local resources needed)

Issue 12: How to Switch Between Unlimited and Cloud Models

Problem: You want to use unlimited models sometimes (free) and cloud models other times (faster/better).

Solution: Quick Provider Switching

Method 1: Settings Panel (Persistent Change)

  1. Open Dropstone Agent Settings
  2. Go to API Configuration
  3. Select provider:
    • Ollama = Unlimited models (local, free)
    • Dropstone = Cloud models (fast, costs tokens)
    • AWS Bedrock = Cloud models (enterprise)
  4. Click Save
  5. All future requests use selected provider

Method 2: Context Mention (Temporary Override)

Use @mode to temporarily switch:

@mode ollama
Write a Python function to parse JSON

# Uses local Ollama model (unlimited)
@mode dropstone
Analyze this complex codebase and suggest architecture improvements

# Uses Dropstone cloud model (faster, better for complex tasks)

Best Practice:

  • Use Ollama (unlimited) for: Simple tasks, learning, experimentation, privacy-sensitive code
  • Use Dropstone/Cloud for: Complex analysis, critical features, time-sensitive tasks

Still Having Issues?

Collect Diagnostic Information

Before contacting support, gather this info:

For macOS Issues:

# System info
sw_vers
system_profiler SPHardwareDataType | grep "Model\|Memory\|Chip"

# Homebrew info
brew --version
brew doctor

# Dropstone info (if installed)
ls -la /Applications/Dropstone.app

For Unlimited Models Issues:

# Ollama info
ollama --version
ollama list
ollama ps

# System resources
# macOS
vm_stat
# Windows
systeminfo | findstr "Memory"

Get Help

  1. Community Support:

  2. Documentation:

  3. Direct Support:

    • Email: support@dropstone.io
    • Include diagnostic information from above
    • Attach screenshots of errors

Quick Reference

macOS Installation Commands

# Install Homebrew (if needed)
/bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"

# Add to PATH (Apple Silicon)
echo 'eval "$(/opt/homebrew/bin/brew shellenv)"' >> ~/.zprofile
eval "$(/opt/homebrew/bin/brew shellenv)"

# Install Dropstone
brew tap blankline-org/dropstone
brew install --cask dropstone

# Remove the quarantine flag (if macOS blocks the app)
sudo xattr -cr /Applications/Dropstone.app

# Install Rosetta 2 (if needed)
softwareupdate --install-rosetta --agree-to-license

# Upgrade existing Dropstone
brew upgrade --cask dropstone

Unlimited Models Quick Start

# Install Ollama
brew install ollama  # macOS
# or download from ollama.ai for Windows

# Start Ollama service
ollama serve

# Download model (choose based on your RAM)
ollama pull qwen2.5-coder:7b    # 8GB RAM minimum
ollama pull qwen2.5-coder:14b   # 16GB RAM recommended
ollama pull qwen2.5:32b         # 32GB RAM required

# Verify
ollama list

# Configure in Dropstone Agent
# Settings → API Configuration → Select "Ollama" → Choose your model

These solutions cover 95% of common issues. If your problem isn't listed here, check the Troubleshooting Guide or contact support.