In a migration there are three risks that must be controlled, no matter what.
In the development environment: using an original service while thinking you are using the migrated one. In this scenario the tests will work flawlessly, and the go-live day will be plagued with errors because the migrated service hasn't been properly tested. Actually, it hasn't been tested at all!
In production there are two: corrupting the original, live production environment with test data, and, after the go-live, saving production data into already-migrated or development systems.
So, how do we mitigate or avoid the problems behind those risks?
The first is simple and cosmetic, but it works wonders. Everybody remembers OMG Ponies! So choose an official RDC migration colour, the camper the better, and change the background colour on the migration branch of each app. Nobody will be able to claim they ran tests against a real app by mistake, thinking it was its migrating counterpart. Also, if someone mistakenly promotes an app with real properties (from a source code branch other than the migration one), it will be detected instantly.
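As a sketch, the visual guard can be as small as one override in the migration branch's stylesheet. The selector and the colour here are placeholders, not anything from a real project:

```css
/* Migration branch ONLY -- never merge this into the production branch.
   Hypothetical rule; pick the campest colour your eyes can stand. */
body {
    background-color: #ff00ff !important; /* the official migration magenta */
}
```

One glaring line of CSS is enough: any screen that is not blindingly magenta is, by definition, the real app.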
The second is like putting a prophylactic over your systems. Yes, internal firewalls. Netfilter/iptables for the win. Using Windows? Sorry. Twice (the first one for having to use it). Raw iptables is a bit rough, so I prefer Shorewall: a collection of scripts that configures iptables using policies and high-level objects.
The different systems will live in different networks (if they don't, either the migration is trivial or there are more serious issues to think about... like abandoning the boat as fast as possible). In the development environment, rejecting (and logging) egress connections to the original network range is usually enough. If the connection is dropped instead of rejected, you get timeouts; it's usually better to have a fast log with a connection reset in it than to wait minutes without knowing what's happening. The production environment's rules have a different scope: all outward connections are blocked by default, and only selected network ranges are opened explicitly.
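A minimal sketch of what those Shorewall policies could look like. The zone names (`orig`, `net`), the address range and the port are made up for the example; the point is REJECT-with-log in development versus default-deny with explicit openings in production:

```
# /etc/shorewall/policy (development) -- hypothetical zone names
# SOURCE   DEST    POLICY   LOG_LEVEL
fw         orig    REJECT   info       # log and reset: no silent timeouts
fw         all     ACCEPT

# /etc/shorewall/policy (production) -- everything outward denied by default
# SOURCE   DEST    POLICY   LOG_LEVEL
fw         all     REJECT   info

# /etc/shorewall/rules (production) -- open only what is explicitly needed
# ACTION   SOURCE  DEST             PROTO  DEST PORT(S)
ACCEPT     fw      net:10.1.2.0/24  tcp    5432   # e.g. the migrated database range
```

Note the `info` log level on both REJECT policies: that's what feeds the tracking described below.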
Derivative accesses are still a risk, even more so if they are managed by other departments with other standards. Some can be tracked per application... others are a matter of faith.
Tracking all those rejected connections can be done easily in Kibana. Just add to the automatic process that configures shorewall/iptables on all the servers (I use Fabric) an option to relay the Shorewall log to lumberjack/logstash (see Warlogs).
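A sketch of the relay step, assuming Shorewall logs through the kernel facility with its default `Shorewall:` message prefix, and that the logstash collector listens on a syslog port. The file name, host and port are placeholders:

```
# /etc/rsyslog.d/30-shorewall.conf -- hypothetical file name
# Forward every Shorewall reject/drop line to the central collector (@@ = TCP).
:msg, contains, "Shorewall:" @@logstash.example.com:5514
```

Drop a file like this on each server from the same Fabric task that pushes the firewall config, restart rsyslog, and the rejects start showing up in Kibana.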