The brownfield environment is shown in the diagram below (I clicked my way through the deployment, I have to confess :( ):

ASA:
I modified a few items in the configuration generated by AWS, mainly:
interface Management0/0
no management-only
nameif management
security-level 0
ip address dhcp setroute
!
interface TenGigabitEthernet0/0
nameif internal
security-level 100
ip address dhcp setroute
!
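! One VTI per AWS VPN tunnel endpoint; the 169.254.x.x/30 inside addresses come from the AWS-generated config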
interface Tunnel100
nameif tunnel100
ip address 169.254.48.110 255.255.255.252
tunnel source interface management
tunnel destination 3.208.50.157
tunnel mode ipsec ipv4
tunnel protection ipsec profile PROFILE1
!
interface Tunnel200
nameif tunnel200
ip address 169.254.242.78 255.255.255.252
tunnel source interface management
tunnel destination 3.220.28.236
tunnel mode ipsec ipv4
tunnel protection ipsec profile PROFILE1
!
interface Tunnel300
nameif tunnel300
ip address 169.254.16.234 255.255.255.252
tunnel source interface management
tunnel destination 52.21.63.18
tunnel mode ipsec ipv4
tunnel protection ipsec profile PROFILE1
!
interface Tunnel400
nameif tunnel400
ip address 169.254.92.46 255.255.255.252
tunnel source interface management
tunnel destination 54.210.195.29
tunnel mode ipsec ipv4
tunnel protection ipsec profile PROFILE1
!
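! eBGP to AWS (ASN 64512) over the tunnel inside addresses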
router bgp 65000
bgp log-neighbor-changes
bgp graceful-restart
address-family ipv4 unicast
neighbor 169.254.92.45 remote-as 64512
neighbor 169.254.92.45 ebgp-multihop 255
neighbor 169.254.92.45 timers 10 30 30
neighbor 169.254.92.45 activate
neighbor 169.254.242.77 remote-as 64512
neighbor 169.254.242.77 ebgp-multihop 255
neighbor 169.254.242.77 timers 10 30 30
neighbor 169.254.242.77 activate
neighbor 169.254.48.109 remote-as 64512
neighbor 169.254.48.109 ebgp-multihop 255
neighbor 169.254.48.109 timers 10 30 30
neighbor 169.254.48.109 activate
neighbor 169.254.16.233 remote-as 64512
neighbor 169.254.16.233 ebgp-multihop 255
neighbor 169.254.16.233 timers 10 30 30
neighbor 169.254.16.233 activate
redistribute connected
no auto-summary
no synchronization
exit-address-family
!
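! IKEv2/IPsec proposal, profile and policies referenced by the VTIs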
crypto ipsec ikev2 ipsec-proposal SET1
protocol esp encryption aes
protocol esp integrity sha-1
crypto ipsec profile PROFILE1
set ikev2 ipsec-proposal SET1
set pfs group14
set security-association lifetime seconds 3600
crypto ipsec security-association replay window-size 128
crypto ipsec security-association pmtu-aging infinite
crypto ipsec df-bit clear-df management
crypto ikev2 policy 200
encryption aes
integrity sha
group 14
prf sha256
lifetime seconds 28800
crypto ikev2 policy 201
encryption aes
integrity sha
group 14
prf sha256
lifetime seconds 28800
crypto ikev2 enable management
!
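! One tunnel-group per AWS tunnel endpoint; pre-shared keys redacted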
group-policy AWS internal
group-policy AWS attributes
vpn-tunnel-protocol ikev2
tunnel-group 3.208.50.157 type ipsec-l2l
tunnel-group 3.208.50.157 general-attributes
default-group-policy AWS
tunnel-group 3.208.50.157 ipsec-attributes
isakmp keepalive threshold 10 retry 10
ikev2 remote-authentication pre-shared-key *****
ikev2 local-authentication pre-shared-key *****
tunnel-group 3.220.28.236 type ipsec-l2l
tunnel-group 3.220.28.236 general-attributes
default-group-policy AWS
tunnel-group 3.220.28.236 ipsec-attributes
isakmp keepalive threshold 10 retry 10
ikev2 remote-authentication pre-shared-key *****
ikev2 local-authentication pre-shared-key *****
tunnel-group 52.21.63.18 type ipsec-l2l
tunnel-group 52.21.63.18 general-attributes
default-group-policy AWS
tunnel-group 52.21.63.18 ipsec-attributes
isakmp keepalive threshold 10 retry 10
ikev2 remote-authentication pre-shared-key *****
ikev2 local-authentication pre-shared-key *****
tunnel-group 54.210.195.29 type ipsec-l2l
tunnel-group 54.210.195.29 general-attributes
default-group-policy AWS
tunnel-group 54.210.195.29 ipsec-attributes
isakmp keepalive threshold 10 retry 10
ikev2 remote-authentication pre-shared-key *****
ikev2 local-authentication pre-shared-key *****
Testing
Once the configuration is applied to the ASAv, the Site-to-Site VPN connections come up after a few seconds:

A VM running behind the on-prem ASA firewall can ping the VMs running on AWS:

Aviatrix Deployment
Transit and FireNet can be deployed using the following code:
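A minimal sketch, assuming the public Aviatrix mc-transit and mc-firenet Terraform modules (module names and input values are illustrative, not the exact code used in this deployment):

module "transit" {
  source  = "terraform-aviatrix-modules/mc-transit/aviatrix"
  cloud   = "AWS"
  name    = "transit-firenet"
  region  = "us-east-1"
  cidr    = "10.100.0.0/23"
  account = "aws-account"

  # Prepare the transit for FireNet
  enable_transit_firenet = true
}

module "firenet" {
  source         = "terraform-aviatrix-modules/mc-firenet/aviatrix"
  transit_module = module.transit
  firewall_image = "Palo Alto Networks VM-Series Next-Generation Firewall Bundle 1"
}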
Site-2-Cloud
Once the AVX transit is deployed, the next step is to connect it to on-prem:

S2C config:
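In Terraform, the S2C configuration corresponds roughly to an external device connection on the transit. A sketch (resource arguments are from the Aviatrix provider docs; values and the transit output reference are assumptions):

resource "aviatrix_transit_external_device_conn" "onprem_asa" {
  vpc_id            = module.transit.transit_gateway.vpc_id
  gw_name           = module.transit.transit_gateway.gw_name
  connection_name   = "onprem-asa"
  connection_type   = "bgp"
  tunnel_protocol   = "IPsec"
  remote_gateway_ip = "203.0.113.10"   # ASA outside IP (illustrative)
  bgp_local_as_num  = "64525"          # transit ASN (assumption)
  bgp_remote_as_num = "65000"          # matches "router bgp 65000" on the ASA
}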


Checking:

The config for the remote device is downloaded and then applied to the “on-prem” ASA:

Once the config is applied, we can see the new BGP peers:
- 169.254.38.66 and 169.254.45.178

Because the “on-prem” ASA actually runs on AWS, I had to tell Aviatrix to use the ASA’s private IP address as the remote identifier instead of the default public IP.

Deploy Gateways
We will “inject” AVX gateways into the existing VPCs using “empty” subnets. If there is no room for a new subnet, we can add another CIDR to the VPC:
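For example, associating a secondary CIDR and carving a gateway subnet out of it (VPC ID and CIDRs are placeholders):

resource "aws_vpc_ipv4_cidr_block_association" "gw_cidr" {
  vpc_id     = "vpc-0123456789abcdef0"
  cidr_block = "10.1.255.0/24"
}

resource "aws_subnet" "avx_gw" {
  # Referencing the association ensures the new CIDR exists first
  vpc_id     = aws_vpc_ipv4_cidr_block_association.gw_cidr.vpc_id
  cidr_block = "10.1.255.0/28"
}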

We can also deploy spokes into an existing VPC using Terraform, as sketched below.
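A sketch, assuming the public Aviatrix mc-spoke module (inputs are illustrative, not the exact code used here):

module "spoke_vpc_a" {
  source  = "terraform-aviatrix-modules/mc-spoke/aviatrix"
  cloud   = "AWS"
  name    = "spoke-vpc-a"
  region  = "us-east-1"
  account = "aws-account"

  # Reuse the existing VPC and the "empty" subnet created for the gateway
  use_existing_vpc = true
  vpc_id           = "vpc-0123456789abcdef0"
  gw_subnet        = "10.1.255.0/28"

  # Do not attach to the transit yet
  attached = false
}

Once the gateways are deployed, we have the following setup: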

Gateways are not attached to the transit at this time.
Disable Spoke Advertisement
Preparing for the spoke attachment, we can disable the VPC CIDR advertisement using “Customize Spoke Advertised VPC CIDRs” and list only the subnet where the gateways were deployed:
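In the spoke sketch above, this maps to something like the following (argument name taken from the Aviatrix provider/module docs; CIDR is illustrative):

module "spoke_vpc_a" {
  # ...inputs as before...

  # Advertise only the gateway subnet instead of the full VPC CIDR
  included_advertised_spoke_routes = "10.1.255.0/28"
}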

Attachment
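In the Terraform sketch, attaching the spoke is just a matter of pointing it at the transit (the transit output name is an assumption based on the mc-transit module):

module "spoke_vpc_a" {
  # ...inputs as before...

  # Attach to the transit deployed earlier
  attached   = true
  transit_gw = module.transit.transit_gateway.gw_name
}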
The environment once the gateways are attached is shown below:

Checking the ASA routes:

Checking VPC routes:
- Existing routes/more specifics point to the VGW
- RFC1918 routes were added to the routing table, pointing to the Aviatrix gateway

Cut-Over
There are two, maybe three, steps in this phase of the overall migration process:
- advertise all prefixes from the Aviatrix spokes (see the sketch after this list)
- shut down the ASA tunnels
- disable route propagation
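In the Terraform sketch used throughout, the first step is just removing the advertisement filter; the other two happen on the ASA (shutting down the tunnel interfaces) and in the VPC route tables:

module "spoke_vpc_a" {
  # ...inputs as before...

  # Remove (or empty) this argument so the full VPC CIDR is advertised again
  # included_advertised_spoke_routes = "10.1.255.0/28"
}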
After those steps:
- firewall


- VPC

Clean up
The last step is to remove the VPN connections and the VGW from the environment:

There are several variations of this migration, and the entire process can be automated to cause minimal, if any, disruption to the app flows.