Creating a cluster-aware scheduled task has several benefits but has historically been quite difficult. The Volume Shadow Copy Service Task resource type in Windows Server 2003 clusters provides a mechanism to allow scheduled-task capability as a cluster resource. Despite the name, this resource type appears to be a generic cluster resource that provides access to the standard Task Scheduler interface to schedule and run any command within a resource group.
This post covers creating the cluster resource using the cluster.exe command-line interface, some best practices (in my opinion) such as preventing this resource from affecting the group, setting network and disk dependencies, and using local scripts, as well as some background on the LooksAlive/IsAlive functions provided by this resource type.
Create the resource in the target group:
cluster /cluster:%Cluster% res "%TaskName%" /create /group:"BNE-VFP03-CL4" /type:"Volume Shadow Copy Service Task"
Set the application to run, its parameters and the working directory:
cluster /cluster:%Cluster% res "%TaskName%" /priv ApplicationName="cmd.exe"
cluster /cluster:%Cluster% res "%TaskName%" /priv ApplicationParams="/c Command-Batch-Or-Script"
cluster /cluster:%Cluster% res "%TaskName%" /priv CurrentDirectory=""
Set a description for the resource:
cluster /cluster:%Cluster% res "%TaskName%" /prop Description="Task Description"
Add network name and physical disk dependencies:
cluster /cluster:%Cluster% res "%TaskName%" /AddDep:"%NetworkName%"
cluster /cluster:%Cluster% res "%TaskName%" /AddDep:"%PhysicalDisk%"
Set the restart action so a failure of the task does not affect the group:
cluster /cluster:%Cluster% res "%TaskName%" /prop RestartAction=1
Bring the resource online:
cluster /cluster:%Cluster% res "%TaskName%" /On
Set the schedule using the Cluster Administrator GUI; the schedule cannot currently be set with cluster.exe in Windows Server 2003 clusters.
Take the resource offline and bring it online again to apply the changes:
cluster /cluster:%Cluster% res "%TaskName%" /Off
cluster /cluster:%Cluster% res "%TaskName%" /On
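To review the configuration at any point, the resource's common and private properties can be listed with the same cluster.exe interface:
cluster /cluster:%Cluster% res "%TaskName%" /prop
cluster /cluster:%Cluster% res "%TaskName%" /priv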
References:
Volume Shadow Copy Service resource type; Using Shadow Copies of Shared Folders in a server cluster:
http://technet2.microsoft.com/windowsserver/en/library/f6b35982-b355-4b55-8d7f-33127ded5d371033.mspx?mfr=true
http://technet2.microsoft.com/windowsserver/en/library/bc7b7f3a-d477-42b8-8f2d-a99748e3db3b1033.mspx?mfr=true
Generic Script resource type:
http://technet2.microsoft.com/windowsserver/en/library/66a9936d-2234-411f-87b4-9699d5401c8c1033.mspx?mfr=true
Scheduled task does not run after you push the task to another computer:
http://support.microsoft.com/kb/317529
Scheduled Task for the Shadow Copies of Shared Folders Feature May Not Run on a Windows Server 2003 Cluster:
http://support.microsoft.com/kb/828259
Behavior of the LooksAlive and IsAlive functions for the resources that are included in the Windows Server Clustering component of Windows Server 2003:
http://support.microsoft.com/kb/914458
From the TechNet documentation for this resource type: "With the Volume Shadow Copy Service Task resource type, you can create jobs in the Scheduled Task folder that must be run on the node that is currently hosting a particular resource group. In this way, you can define a scheduled task that can fail over from one cluster node to another. However, in the Microsoft® Windows Server 2003 family of products, the Volume Shadow Copy Service Task resource type has limited capabilities for scheduling tasks and serves primarily to support Shadow Copies of Shared Folders in a server cluster. If you need to extend the capabilities of this resource type, consider using the Generic Script resource type."
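As a rough sketch of that Generic Script alternative (the resource name, group variable and script path below are placeholders, not from the original post), the resource could be created in much the same way as above; the referenced VBScript must implement the standard Generic Script entry points such as Online, Offline, LooksAlive and IsAlive:
cluster /cluster:%Cluster% res "%TaskName%-Script" /create /group:"%Group%" /type:"Generic Script"
cluster /cluster:%Cluster% res "%TaskName%-Script" /priv ScriptFilepath="c:\admin\ClusterTask.vbs"
cluster /cluster:%Cluster% res "%TaskName%-Script" /On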
Wayne's World of IT (WWoIT), Copyright 2008 Wayne Martin.
Information regarding Windows Infrastructure, centred mostly around commandline automation and other useful bits of information.
4 comments:
Hello there, excellent information here on clustered task scheduler services.
I am searching high and low for anyone who may have had success setting up a cluster environment similar to what I am looking to achieve.
My team and I are responsible for a Task Scheduler cluster within my department that takes a lot of unneeded time from us, with simple requests from tons of developers to create and edit jobs running on these boxes.
Pretty simple environment: a separate two-node cluster for each of Development, Test, and Production (6 nodes total, 3 separate clusters).
When creating or editing jobs, we have to do so on all three active nodes, and then backtrack and start the services on the inactive nodes in order to edit those as well. All jobs use service accounts in AD for authentication, and each job is created in all three environments using the same account.
The applications or content that these jobs run are on a SAN disk that is mounted only on the active node. During a failover, this disk is removed from the node going inactive and comes online on the other node.
My first idea to streamline the management of this environment was to relocate all of the jobs to the SAN drive as well, so that they were only available to the active node, cutting create/edit time to less than half. This would also keep us from having issues with jobs not matching on each node, which happens a lot and only surfaces when a failover occurs.
Although this seems like an easy fix, I found that the account information for each job is indeed stored in the local Protected Storage database, and this would cause authentication on all our jobs to fail once a failover took place.
I guess what I am hoping for is some way to manually create the security descriptors on both the active and inactive nodes when creating the jobs, so that there is no extra work needed when a failover takes place.
So far, it's looking like I may be a fanatical nut job here, trying to accomplish the impossible. But based on your apparent knowledge of this subject, I'm hoping you may have a few ideas.
Sorry for the novel, and thanks ahead of time for any consideration you have on this.
Best Wishes,
J
Hi John,
I’ve not had to do what I think you’re after, but some ideas:
1. Run the scheduled tasks as 'nt authority\system', and then grant the computer accounts access to any network resources required to execute the jobs. A task running under this context should fail over without issue (a schtasks example is sketched below).
2. Create a generic application cluster resource that runs a batch script (or vbscript/powershell etc) to create and maintain your scheduled tasks on the local node. As this cluster resource fails over to another node, it would automatically create the tasks and set the appropriate contexts.
Neither is great from a security perspective, but they might give you something else to try.
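For the first option, a task that runs as the local system account could be created with schtasks along these lines (the task name, schedule and command are placeholders only):
schtasks /create /tn "NightlyJob" /tr "cmd /c c:\jobs\nightly.bat" /sc daily /st 05:30:00 /ru "NT AUTHORITY\SYSTEM"
No /rp password is needed when the task runs as the system account.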
You could easily accomplish the second with a series of schtasks.exe commands:
1. Delete all existing tasks, either from a static list or dynamically.
2. Re-create each scheduled task with schtasks.
The command below could be used to delete all local tasks; change the echo to a schtasks /delete command to actually perform the deletions:
for /f "tokens=1,*" %i in ('"schtasks /query /fo list | find /i "taskname:""') do echo %j
Then you would re-create each current scheduled task. This has the advantage of ensuring that all your scheduled tasks are current, that there are no extra scheduled tasks, and that if anybody has been tinkering, everything is reset on the next failover.
Note that you would need to run the batch in the cluster generic application with 'cmd /k' to ensure the process never exits (which would otherwise stop the cluster resource).
This would give you a cluster-aware application managing local scheduled tasks on the node hosting the resource.
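As a rough sketch (the resource name, group variable and batch path are placeholders), the generic application resource could be created with cluster.exe like this:
cluster res "TaskMaintenance" /create /group:"%Group%" /type:"Generic Application"
cluster res "TaskMaintenance" /priv CommandLine="cmd /k c:\admin\CreateTasks.bat"
cluster res "TaskMaintenance" /priv CurrentDirectory="c:\admin"
cluster res "TaskMaintenance" /On
The cmd /k keeps the process running, so the cluster service continues to see the resource as online.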
A few other random thoughts:
1. Stored passwords might be of some use for the network access, i.e. you could persist the accounts/passwords for SYSTEM to use when connecting over the network with cmdkey.exe (I haven't tried this; a possible command is sketched below).
2. In the second option above, you could at least secure the batch file such that none of your pesky developers could get to it.
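For the stored-password idea, the cmdkey syntax would be something like the following (the target server, account and password are placeholders, and the credentials would need to be stored under the same context the tasks run as; I haven't tested this):
cmdkey /add:fileserver01 /user:DOMAIN\SchTaskUser /pass:SomePassword
cmdkey /list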
So in summary I don’t know of any way to persist credentials across nodes using the lsass protected storage (and if there was something it would be a hack and unsupported I guess), and hopefully my other rambling gave you something else to think about.
Actually Wayne, your second idea sounds like possibly the best approach here. Although it doesn't decrease the number of locations the jobs are stored in each cluster, it does give us one location to update while also removing the possibility of mismatched jobs on the active and inactive servers.
I wonder if there is already a script or program someone has created that allows us to input the task variables and submits the schtasks command to the server for creation. If so, I would then only need to add additional commands to copy that text into the batch file that runs during failover and voila, we have a single job creation tool that updates all servers...
Hi John,
The control file and batch below would be a very simple start to creating any number of local or remote scheduled tasks from a control file, after first deleting all existing tasks.
Control File:
TaskName,Username,Password,Repeat,Time,Cmd
Task1,SchTaskUser,SchTaskPwd,Daily,05:30:00,do something
:: CreateTasks.bat
Set AdminLog=C:\Admin\Logs
if not exist %AdminLog% md %AdminLog%
for /f "tokens=1-8 delims=/:. " %%i in ('echo %date%') do Set DateFlat=%%l%%k%%j
Set LogFile=%AdminLog%\%~n0_%DateFlat%.log
Echo %Date% %Time%: Create Tasks starting >> %LogFile%
:: Echo a schtasks /delete command for each existing task, logging to the log file (remove the echo in front of schtasks to actually delete)
for /f "tokens=1,*" %%i in ('"schtasks /query /fo list | find /i "taskname:""') do echo Deleting "%%j" >> %LogFile% & echo schtasks /delete /tn "%%j"
:: To delete all tasks you could also run: schtasks /delete /TN * /F
:: For each line in the control file (skipping the header row), echo the schtasks /create command (remove the echo to actually create the tasks)
for /f "skip=1 tokens=1-5,* delims=," %%i in (c:\temp\tasks.txt) do echo Creating "%%i" & echo schtasks /create /tn "%%i" /ru "%%j" /rp "%%k" /sc %%l /st %%m /tr "%%n"
Echo %Date% %Time%: Create Tasks finishing >> %LogFile%
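For the sample control file line above, once the echo in front of schtasks is removed, the create loop would run a command like:
schtasks /create /tn "Task1" /ru "SchTaskUser" /rp "SchTaskPwd" /sc Daily /st 05:30:00 /tr "do something"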