Hello,
We have Control-M Agent 6.3.01.300 installed on two Windows 2008 servers clustered in an active/passive configuration. The jobs are pointed at the VIP name. We noticed that jobs would run and turn green, but did not appear to execute. On further investigation, the jobs were running, but on the server marked as passive. The Agent Host Name is the physical server name for each agent, and the Logical Agent Name is the VIP name, which is the same on both. The jobs use the VIP name as the node to run on. What I want to know is: is there anything in the Agent (or the Control-M/Server) that retains where to run the job? I'm at a loss as to why a job would run on the passive server when the VIP points to the active server. Thanks for any insight.
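One quick check before digging into the Agent configuration is to confirm what the VIP name actually resolves to from the Control-M/Server and from each node. A minimal sketch (using `localhost` as a stand-in, since your real VIP name is not known here):

```python
import socket

# Stand-in for the cluster VIP / logical agent name; substitute your real one.
vip_name = "localhost"

# Resolve the name and show which physical address answers for it right now.
resolved_ip = socket.gethostbyname(vip_name)
print(f"{vip_name} resolves to {resolved_ip}")
```

If the VIP resolves to the passive node's address (or to a stale cached entry), the Server will hand the job to whichever agent is listening there, which would match the symptom you describe.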
Jobs directed to Windows Cluster running on passive server
> The Agent Host Name is the server name for each agent, and the Logical Agent name is the VIP name, which is the same on both.

That means you have two agents sharing the same logical name! Control-M/Server will be confused by this configuration.
For an agent that should run on the cluster's active node, the agent should be installed as a cluster installation; this is the first question in the installation wizard.
I have installed the agent on Windows servers many times, so I do not understand what your problem could be.
Do you have Cluster Resources defined for the cluster?
The Agent services should be among those resources.
That way, your agent is started only when the resources are on the desired virtual node (where the VIP is).
With such a configuration, there is no way for the agent to start on any node other than the active one.
Remember, though, that when you switch to the other node, the job status disappears (unless the filesystem is migrated across cluster nodes).
So when you switch back to that node, it can return the status of previously executed jobs.
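The invariant described above can be sketched as follows; `NODE-A` and `NODE-B` are hypothetical physical node names, and the function simply encodes the rule the cluster resource group should enforce:

```python
# The rule: the agent service belongs to the cluster resource group,
# so it runs only on the node that currently owns the VIP.
def agent_should_run(node: str, active_node: str) -> bool:
    return node == active_node

nodes = ["NODE-A", "NODE-B"]  # hypothetical physical node names
active = "NODE-A"             # node currently owning the VIP / resource group

for node in nodes:
    state = "running" if agent_should_run(node, active) else "stopped"
    print(f"{node}: agent service should be {state}")
```

If both agents run regardless of which node owns the VIP (as in your current setup), nothing stops the passive node's agent from accepting work submitted to the shared logical name.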
- philmalmaison
The question is whether the agent filesystem is in the cluster scope (I assume you installed an agent on both systems, with the same VIP).
If it is not, use only physical addresses for the agents: on the primary agent, add the primary Control-M/Server's physical address and authorize the second one (the failover physical address).
Do the same on the failover agent: add the failover Control-M/Server's physical address as the primary, and authorize the second server (the primary physical address).
It should work fine.
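In other words, each agent talks to the Control-M/Server by physical address only. A sketch of the intended values (all host names here are hypothetical, and the exact parameter labels may differ in your agent version):

```text
Agent on primary node (physical name NODEA):
  Primary Control-M/Server host:     ctmsrv1          <- physical address
  Authorized Control-M/Server hosts: ctmsrv1, ctmsrv2

Agent on failover node (physical name NODEB):
  Primary Control-M/Server host:     ctmsrv2          <- physical address
  Authorized Control-M/Server hosts: ctmsrv2, ctmsrv1
```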
regards
philmalmaison