Migration Guides
Step-by-step tutorials for every migration scenario. From simple same-tenant moves to complex cross-tenant M&A migrations.
Migration Scenarios
Same-Tenant Migration
Within your organization
Migrate lists between sites within the same Microsoft 365 tenant. Perfect for reorganizing content, archiving projects, or consolidating data.
Common Use Cases
- Site consolidation and restructuring
- Archiving completed projects
- Creating template lists from existing data
- Hub site migrations
Key Features
- Uses SharePoint REST API and PnP JS for direct access
- Automatic user resolution via ensureUser()
- Lookup field ID mapping preserved automatically
- No additional authentication setup required
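Resolution via ensureUser() is typically paired with a per-run cache so each principal is only looked up once. A minimal sketch of that caching pattern — the `resolve` callback stands in for a real `ensureUser()`-style call and is an assumption, not the toolkit's actual API:

```typescript
// Memoize user resolution so each email triggers at most one lookup.
// `resolve` stands in for an ensureUser()-style call (assumed shape).
function makeCachedResolver(resolve: (email: string) => number) {
  const cache = new Map<string, number>();
  return (email: string): number => {
    const key = email.toLowerCase();
    if (!cache.has(key)) {
      cache.set(key, resolve(key));
    }
    return cache.get(key)!;
  };
}

// Stub resolver that counts how often the "API" is actually called.
let apiCalls = 0;
const ensureUserStub = (_email: string) => { apiCalls++; return 42; };
const resolveUser = makeCachedResolver(ensureUserStub);

resolveUser("alice@contoso.com");
resolveUser("Alice@Contoso.com"); // cache hit: keys are case-insensitive
console.log(apiCalls); // 1
```

The same pattern applies to any repeated resolution step (users, lookups, content types): inject the expensive call and memoize around it.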
Cross-Tenant Migration
Between organizations
Migrate data between different Microsoft 365 tenants. Essential for mergers, acquisitions, divestitures, and partner data transfers.
Common Use Cases
- Mergers & Acquisitions (M&A)
- Corporate divestitures and spin-offs
- Partner/vendor data transfers
- Tenant consolidation
Requirements
- Azure AD App Registration in source tenant
- Microsoft Graph API permissions (Sites.ReadWrite.All)
- Admin consent for delegated permissions
- User mapping configuration for people fields
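A user mapping configuration for people fields often combines explicit per-user entries with a domain-rewrite fallback. A sketch of that idea — the shape and names below are illustrative, not the toolkit's actual config format:

```typescript
// Map source-tenant users to target-tenant users for people fields.
// Explicit entries win; otherwise fall back to a domain rewrite.
// (Illustrative shape -- not the toolkit's actual config format.)
const explicitMap: Record<string, string> = {
  "ceo@oldcorp.com": "chief.executive@newcorp.com",
};

function mapUser(sourceEmail: string, targetDomain: string): string {
  const key = sourceEmail.toLowerCase();
  if (explicitMap[key]) return explicitMap[key];
  const local = key.split("@")[0];
  return `${local}@${targetDomain}`;
}

console.log(mapUser("ceo@oldcorp.com", "newcorp.com"));  // chief.executive@newcorp.com
console.log(mapUser("jane@oldcorp.com", "newcorp.com")); // jane@newcorp.com
```

Explicit entries handle renamed accounts (common after M&A); the domain rewrite covers the bulk of users whose local part is unchanged.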
Migration Modes
Structure + Data
Complete migration including list schema, fields, content types, and all item data. The default, recommended, and most common mode.
Structure Only
Create list schema without data. Useful for creating templates or validating field compatibility (pre-flight testing) before a full migration.
Data Only
Migrate items into an existing list (incremental updates). The target list must already exist with a compatible field schema.
Incremental / Delta Migration
Run migrations incrementally — only items changed since your last run are processed. Named baselines let you maintain independent sync points for different workflows against the same lists.
How It Works
Timestamp-based change detection
1. Open Advanced Options → Incremental / Delta Migration
2. Set Incremental mode to "Modified timestamp"
3. Give it a Baseline ID (e.g., "weekly-sync") — or leave as "default"
4. Run the migration — SP Toolkit records what was processed and when
5. Next run with the same Baseline ID automatically skips unchanged items
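The change detection behind step 5 amounts to comparing each item's Modified timestamp against the baseline's last recorded run. A sketch of that filter, with illustrative field names:

```typescript
// Timestamp-based change detection: keep only items modified after the
// baseline's last recorded run. Field names are illustrative.
interface ListItem { Id: number; Modified: string; } // ISO timestamps

function itemsChangedSince(items: ListItem[], lastRunIso: string | null): ListItem[] {
  if (!lastRunIso) return items; // first run: everything counts as changed
  const lastRun = Date.parse(lastRunIso);
  return items.filter((i) => Date.parse(i.Modified) > lastRun);
}

const items: ListItem[] = [
  { Id: 1, Modified: "2024-05-01T10:00:00Z" },
  { Id: 2, Modified: "2024-06-15T08:30:00Z" },
];

console.log(itemsChangedSince(items, "2024-06-01T00:00:00Z").map((i) => i.Id)); // [ 2 ]
console.log(itemsChangedSince(items, null).length); // 2
```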
Named Baselines
Independent sync points
Each Baseline ID tracks its own independent delta state. This means you can run multiple sync workflows against the same lists without them interfering.
Example Scenarios
- "weekly-sync" — automated weekly incremental sync
- "pre-cutover" — final sync before go-live cutover
- "testing" — disposable baseline for test runs you can reset freely
- "client-abc" — MSPs tracking per-client sync state
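Conceptually, named baselines are just independent last-run records keyed by Baseline ID. A minimal in-memory sketch (a localStorage-backed version would serialize this map; the function names are assumptions, not the toolkit's API):

```typescript
// Each Baseline ID keeps its own last-run timestamp, so different
// workflows against the same lists never interfere with each other.
const baselines = new Map<string, string>(); // baselineId -> last run (ISO)

function lastRun(baselineId: string): string | null {
  return baselines.get(baselineId) ?? null;
}

function recordRun(baselineId: string, whenIso: string): void {
  baselines.set(baselineId, whenIso);
}

recordRun("weekly-sync", "2024-06-01T00:00:00Z");
recordRun("pre-cutover", "2024-06-20T00:00:00Z");

console.log(lastRun("weekly-sync")); // 2024-06-01T00:00:00Z
console.log(lastRun("testing"));     // null -- untouched baseline
```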
Settings Reference
| Setting | Options | Description |
|---|---|---|
| Incremental mode | None / Modified timestamp | None = full migration every time. Modified timestamp = only items changed since the baseline. |
| Baseline ID | Any text (default: default) | Logical name for this sync point. Different names = independent tracking. Use descriptive names like "weekly-sync" or "pre-cutover". |
| Cutoff date (ISO) | ISO date string | Optional hard cutoff — only items modified on or after this date are included, regardless of baseline state. |
| Delta state persistence | localStorage / Memory | localStorage = survives browser close (recommended). Memory = lost when you close the tab. |
| Force re-evaluate | On / Off | When enabled, re-processes items even if their Modified timestamps match the baseline. Useful for troubleshooting. |
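The settings above compose into a single per-item decision: the hard cutoff always applies, while the baseline comparison can be bypassed by force re-evaluate. A sketch of how those filters might combine, under assumed semantics:

```typescript
interface Item { Id: number; Modified: string; }

// An item is migrated when it passes BOTH filters:
//  - on or after the optional hard cutoff date
//  - newer than the baseline's last run (unless force re-evaluate is on)
function shouldMigrate(
  item: Item,
  lastRunIso: string | null,
  cutoffIso: string | null,
  forceReevaluate: boolean,
): boolean {
  const modified = Date.parse(item.Modified);
  if (cutoffIso && modified < Date.parse(cutoffIso)) return false;
  if (forceReevaluate || !lastRunIso) return true;
  return modified > Date.parse(lastRunIso);
}

const item = { Id: 7, Modified: "2024-06-10T12:00:00Z" };
console.log(shouldMigrate(item, "2024-06-15T00:00:00Z", null, false)); // false: unchanged since baseline
console.log(shouldMigrate(item, "2024-06-15T00:00:00Z", null, true));  // true: force re-evaluate
console.log(shouldMigrate(item, null, "2024-07-01T00:00:00Z", false)); // false: before hard cutoff
```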
Tips
- Use "Refresh baseline stats" to see how many items/lists are tracked
- Baselines are saved per browser — if you switch machines, the baseline won't carry over
- Combine with Data Only migration mode for ongoing sync scenarios
- Save your migration profile with the baseline ID included for repeatable runs
Watch Out
- Clearing browser data will erase localStorage baselines — use "Export Profile" to back up your config
- If you rename a Baseline ID, it starts a new empty baseline — the old one still exists under the old name
- "Reset baseline" is irreversible — it clears all tracking for that baseline ID
- Items deleted in the source won't be detected by timestamp-based mode — use Match-Based Sync for deletions
Special Content Migration
Document Library Migration
Full support for document libraries including files, folders, and metadata.
- Chunked upload for large files (>2MB)
- Recursive folder structure preservation
- Full metadata preservation
- Automatic check-in after metadata update
- Auto-publish for minor version libraries
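Chunked upload splits a large file into fixed-size byte ranges that are sent one at a time. A sketch of the range computation — the sizes are illustrative, not the toolkit's exact thresholds:

```typescript
// Files above the chunk threshold are uploaded in fixed-size slices,
// each described by [start, end) byte offsets. Sizes are illustrative.
function chunkRanges(fileSize: number, chunkSize: number): Array<[number, number]> {
  const ranges: Array<[number, number]> = [];
  for (let start = 0; start < fileSize; start += chunkSize) {
    ranges.push([start, Math.min(start + chunkSize, fileSize)]);
  }
  return ranges;
}

const MB = 1024 * 1024;
console.log(chunkRanges(5 * MB, 2 * MB));
// three slices: [0..2MB), [2MB..4MB), [4MB..5MB)
```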
Version History
Preserve document and list item version history during migration.
- File version migration (configurable)
- List item version replay
- Version cap limits (maxFileVersions, maxListItemVersions)
- Check-in comments preserved
- Chronological replay
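Chronological replay with a version cap means sorting versions oldest-first and keeping only the most recent N when a limit like maxFileVersions is set. A sketch with illustrative field names:

```typescript
// Replay versions oldest-first, keeping only the most recent N when a
// cap (e.g. maxFileVersions) is configured. Field names are illustrative.
interface Version { label: string; created: string; }

function versionsToReplay(versions: Version[], maxVersions: number | null): Version[] {
  const ordered = [...versions].sort(
    (a, b) => Date.parse(a.created) - Date.parse(b.created),
  );
  return maxVersions ? ordered.slice(-maxVersions) : ordered;
}

const history: Version[] = [
  { label: "3.0", created: "2024-03-01T00:00:00Z" },
  { label: "1.0", created: "2024-01-01T00:00:00Z" },
  { label: "2.0", created: "2024-02-01T00:00:00Z" },
];

console.log(versionsToReplay(history, 2).map((v) => v.label)); // [ '2.0', '3.0' ]
```

Capping from the end of the sorted array keeps the newest versions, which is what you usually want when trimming deep histories.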
Attachments
Migrate list item attachments with full fidelity.
- Direct binary upload to target
- Per-attachment error handling
- Aggregate size tracking
- Optional enable/disable
User/People Fields
Intelligent user resolution across tenant boundaries.
- Email-based resolution (primary)
- Login name construction fallback
- User mapping cache for performance
- Fallback display names for unresolved users
- Multi-user field batch processing
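The resolution order above — email first, constructed login name second, display-name fallback last — can be sketched as a chain of attempts. The two lookup callbacks stand in for real directory calls and are assumptions:

```typescript
// Resolution order for people fields: email lookup first, then a
// constructed claims-style login name, then a display-name fallback.
// The lookup callbacks stand in for real directory calls (assumed).
type UserRef = { id: number } | { fallbackName: string };

function resolvePerson(
  email: string,
  displayName: string,
  byEmail: (e: string) => number | null,
  byLogin: (l: string) => number | null,
): UserRef {
  const viaEmail = byEmail(email);
  if (viaEmail !== null) return { id: viaEmail };
  const viaLogin = byLogin(`i:0#.f|membership|${email}`); // constructed login
  if (viaLogin !== null) return { id: viaLogin };
  return { fallbackName: displayName }; // unresolved: keep a readable name
}

const byEmailStub = (e: string) => (e === "known@contoso.com" ? 7 : null);
const byLoginStub = (_l: string) => null;

console.log(resolvePerson("known@contoso.com", "Known User", byEmailStub, byLoginStub)); // { id: 7 }
console.log(resolvePerson("gone@contoso.com", "Gone User", byEmailStub, byLoginStub));   // { fallbackName: 'Gone User' }
```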
Lookup Fields
Handle complex lookup relationships with automatic ID mapping.
- Circular dependency detection & resolution
- Source-to-target ID translation
- Title index cache for performance
- Multi-lookup array handling
- Deferred backfill for circular lookups
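Source-to-target ID translation with deferred backfill can be sketched as a mapping pass that collects unresolvable references (for example, circular lookups whose target items don't exist yet) instead of failing the item:

```typescript
// Translate source lookup IDs to target IDs. Unresolvable references
// (e.g. circular lookups not yet created) are collected for a later
// backfill pass rather than failing the item outright.
function translateLookups(
  sourceIds: number[],
  idMap: Map<number, number>,
): { targetIds: number[]; deferred: number[] } {
  const targetIds: number[] = [];
  const deferred: number[] = [];
  for (const id of sourceIds) {
    const mapped = idMap.get(id);
    if (mapped !== undefined) targetIds.push(mapped);
    else deferred.push(id);
  }
  return { targetIds, deferred };
}

const idMap = new Map([[10, 101], [11, 102]]);
console.log(translateLookups([10, 11, 12], idMap));
// { targetIds: [ 101, 102 ], deferred: [ 12 ] }
```

The same function covers single-lookup and multi-lookup fields, since a single lookup is just a one-element array of IDs.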
Rich Text Content
Preserve HTML content with asset reference handling.
- HTML normalization and cleanup
- Asset reference detection
- URL rewriting for migrated content
- Stats and warning capture
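URL rewriting for migrated content boils down to replacing the source web's URL prefix with the target's wherever it appears in the HTML. A deliberately naive sketch — real handling would also cover server-relative URLs and encoding edge cases:

```typescript
// Rewrite absolute source-site URLs inside migrated HTML so links and
// image references point at the target site. A naive prefix rewrite.
function rewriteAssetUrls(html: string, sourceWeb: string, targetWeb: string): string {
  return html.split(sourceWeb).join(targetWeb);
}

const html = '<img src="https://old.sharepoint.com/sites/HR/SiteAssets/logo.png">';
console.log(
  rewriteAssetUrls(html, "https://old.sharepoint.com/sites/HR", "https://new.sharepoint.com/sites/People"),
);
// <img src="https://new.sharepoint.com/sites/People/SiteAssets/logo.png">
```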
List Views
Automatically copy source list views to the target.
- All views copied (default, custom, personal)
- Column ordering preserved
- Filters and sorting maintained
- View field usage tracking
- Missing field detection
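Missing field detection before a view copy amounts to partitioning the view's fields by whether they exist on the target list. A sketch of that check:

```typescript
// Before copying a view, partition its fields into those present on the
// target list (safe to copy) and those missing (reported as warnings).
function partitionViewFields(
  viewFields: string[],
  targetFields: Set<string>,
): { copied: string[]; missing: string[] } {
  const copied = viewFields.filter((f) => targetFields.has(f));
  const missing = viewFields.filter((f) => !targetFields.has(f));
  return { copied, missing };
}

const target = new Set(["Title", "Status", "Owner"]);
console.log(partitionViewFields(["Title", "Status", "LegacyCode"], target));
// { copied: [ 'Title', 'Status' ], missing: [ 'LegacyCode' ] }
```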
Best Practices
Do
- ✓ Run pre-flight validation before every migration
- ✓ Test with "Structure Only" mode first for cross-tenant
- ✓ Use smaller batch sizes (50-100) for complex lists
- ✓ Review and adjust field mappings manually for critical data
- ✓ Export fallback reports to verify user/lookup resolutions
- ✓ Schedule large migrations during off-peak hours
Avoid
- ✗ Migrating without testing on a small sample first
- ✗ Using aggressive batch sizes on throttling-prone tenants
- ✗ Ignoring validation warnings
- ✗ Running migrations during business hours for large datasets
- ✗ Skipping the lookup dependency review step
- ✗ Assuming all users will resolve in cross-tenant scenarios
Related Documentation
Field Types Reference
Complete list of supported field types and how they're handled.
View Reference →