Hello everyone,
I’ve recently migrated a project to flownative/aws-s3 together with CloudFront, using the two-bucket setup. I’m building the architecture initially on Minikube, and will later push the project to a real Kubernetes cluster.
Now I’m considering the “migration from environment A → environment B” workflow, and would love to hear feedback about best practices.
Here’s my current (partially automated) workflow:
Environment A (source):
- Run `flow site:export --site-node vendor-neos --package-key vendor.Neos`
- Back up the Persistent/Resources folder
- Dump the PostgreSQL 17 database (using --data-only)
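The Environment A steps roughly look like this on the shell. The database name `neos_db` and the archive file names are placeholders from my setup, not anything standardized:

```shell
# Export the site content (package key is my site package)
./flow site:export --site-node vendor-neos --package-key vendor.Neos

# Back up the persistent resources
tar czf resources-backup.tar.gz -C Data/Persistent Resources

# Dump the database, data only (PostgreSQL 17)
pg_dump --data-only --format=custom --file=neos-data.dump neos_db
```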
Environment B (target):
- Spin up a fresh DB
- Deploy Neos and run `doctrine:migrate`
- Run `flow site:import --package-key vendor.Neos` using the export from A
- Copy the Resources backup (from A) into Data/Persistent/ of B
- Truncate all media-related tables in the new DB, then import the DB dump from A
  (Reason: the import would otherwise point to assets with different hashes that do not match the imported dump)
- Flush caches, publish resources, etc.
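The Environment B sequence, as I currently script it. The truncated table names below are illustrative, not my complete list, and `neos_db` is again a placeholder:

```shell
# Fresh schema, then run migrations
./flow doctrine:migrate

# Import the site export from A
./flow site:import --package-key vendor.Neos

# Copy the resources backup from A into place
tar xzf resources-backup.tar.gz -C Data/Persistent

# Truncate media-related tables so the dump from A wins
# (illustrative table names, the real list is longer)
psql neos_db -c 'TRUNCATE neos_media_domain_model_asset, neos_flow_resourcemanagement_persistentresource CASCADE;'

# Import the data-only dump from A
pg_restore --data-only --dbname=neos_db neos-data.dump

# Flush caches and republish resources
./flow flow:cache:flush
./flow resource:publish
```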
My questions:
Now that B uses an S3 bucket, are there changes I should consider in my A → B workflow?
For example, my composer.json currently references my site package via a path repository:
```json
"repositories": {
    "distributionPackages": {
        "type": "path",
        "url": "./DistributionPackages/*"
    }
},
```
Afterwards, I always run `resource:clean` to delete unused local assets.
I also clear the contents of `Data/Persistent/Resources/*` and `Web/_Resources/Persistent/*`.
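Concretely, the cleanup boils down to:

```shell
# Remove resource records that no asset references any more
./flow resource:clean

# Clear locally stored and locally published resource copies
rm -rf Data/Persistent/Resources/*
rm -rf Web/_Resources/Persistent/*
```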
This approach works acceptably, but it feels somewhat hacky.
Is there a cleaner, more robust way to migrate between A and B in the presence of the flownative/aws-s3 two-bucket setup?
Are there known pitfalls, best practices, or commands (e.g. resource:copy, configuration flags) I should use in this migration scenario to avoid inconsistencies, missing assets, cache issues, or hash mismatches?
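For instance, I wonder whether the intended approach is a collection-to-collection copy along these lines. I haven’t verified this against my setup, and the target collection name is just a guess at whatever the S3 storage would be called in Settings.yaml:

```shell
# Copy all resources from the default collection into the S3-backed one
# and publish them in the same run (collection names depend on configuration)
./flow resource:copy --publish persistent s3PersistentResourcesCollection
```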
Any insights, references to docs or real-world experience would be greatly appreciated.
Thanks in advance for your help!