Using the example in the readme, I've attempted to create a migration object. The controller seems to be selecting random deployments from the same namespace, even though the label selector correctly matches the target deployment.
In the example below I'm only trying to get the Job to launch with a simple sleep command, just to prove that it works. A deployment named `frs` with labels `app=frs, component=api` is deployed in the `default` namespace, along with five other applications. Each has a unique `app` label, though.
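For context, here is a minimal sketch of the setup being described: a migration object that selects the `frs` Deployment by label and overrides the command with a sleep. The `apiVersion`, `kind`, and spec field names below are illustrative placeholders based on the readme description, not the operator's actual schema; the image is likewise a placeholder.

```yaml
# Illustrative sketch only -- apiVersion, kind, and field names are assumed,
# not copied from the operator's actual CRD.
apiVersion: migrations.example.com/v1
kind: Migrator
metadata:
  name: frs-migration
  namespace: default
spec:
  # Intended to match only the frs Deployment (app=frs, component=api)
  selector:
    matchLabels:
      app: frs
      component: api
  # Simple command just to prove the Job launches
  command: ["sleep", "300"]
---
# Target Deployment; one of ~6 apps in default, each with a unique app label
apiVersion: apps/v1
kind: Deployment
metadata:
  name: frs
  namespace: default
  labels:
    app: frs
    component: api
spec:
  replicas: 1
  selector:
    matchLabels:
      app: frs
      component: api
  template:
    metadata:
      labels:
        app: frs
        component: api
    spec:
      containers:
        - name: api
          image: example.com/myorg/frs:placeholder  # placeholder image
```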
In the log snippet below, the migration controller indicates it is running with an image from a different deployment than the one the label selector should match. The Job pods are terminating very quickly due to some other issue, but they live long enough for me to see that they are in fact copying the spec of the wrong deployment nearly every time. Which deployment gets selected appears to be random.
```
INFO controllers.migrator.components.user !!! {"object": "default/frs-migration", "last": "", "image": "XXXXXXXXXXXXX.dkr.ecr.us-east-1.amazonaws.com/myorg/auth:e8a87e2"}
```
Determined that this is caused by an older, broken image published to GCR under the `latest` tag, which is missing some code changes. A local rebuild from `main` fixed the problems I was seeing.