First, I understand this is probably something better suited to cron. However, I don't have access to cron (shop rule around here), so I am using the next best scheduling option at my disposal.
This is the sequence:
I schedule a job to start using `at`:
at 5:05 am tomorrow -f /opt/ecommerce/backup/analysis/Data/Scripts/DoDaily.sh
The script executes on time and runs fine up to the point where it should launch another shell script. I might mention, this works perfectly when I start the first script from a normal command prompt.
It might be that this is a limitation of using `at` as a scheduler (it will not launch other shell scripts from within the shell started by `at`).
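One cause worth ruling out (my assumption, not confirmed by anything above): `at` runs the job under `sh` with a reduced, non-interactive environment, so calls that rely on an interactive `PATH` or on bash-specific behavior can fail silently. A minimal sketch that simulates this with `env -i`, using a throwaway script in `/tmp` as a stand-in:

```shell
# Create a throwaway inner script (stand-in for the real nested script).
cat > /tmp/inner.sh <<'EOF'
#!/bin/bash
echo "inner ran, PATH=$PATH"
EOF
chmod +x /tmp/inner.sh

# Interactive-style call: inherits your full environment.
/tmp/inner.sh

# at-style call: sh with an empty environment. A fully qualified path
# still works, but bare command names that rely on PATH may not resolve.
env -i sh -c '/tmp/inner.sh'
```

If the script behaves differently under `env -i` than from your prompt, the stripped-down environment is the likely culprit.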
The second-to-last line (below) is the script that I am trying to call, but it seems to be ignored. Nothing shows up in my spool output saying it had some sort of error. I have tried running it by:
- `nohup`
- calling the script (fully qualified)
If this sort of thing is not possible, I am OK with that too. If I can't figure it out, I usually find an answer using this resource. There are some examples using `at`, but not in this specific scenario. I did check the man page for `at`.
Keep in mind -- This is not the "Pretty" final product. I usually troubleshoot to ensure things work as I "think" they should and then surround with error handling and additional comments.
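As a debugging step (the paths and the stand-in script here are hypothetical, not the real ones), redirecting the nested call's output and exit status to a log file makes the `at` job leave a trace even when the spool output is silent:

```shell
# Hypothetical stand-in for the nested script being debugged.
cat > /tmp/DoFuelDaily.sh <<'EOF'
#!/bin/bash
echo "fuel daily started"
EOF
chmod +x /tmp/DoFuelDaily.sh

# Wrap the nested call: capture stdout, stderr, and the exit status.
LOG=/tmp/dofueldaily.log
/tmp/DoFuelDaily.sh > "$LOG" 2>&1
echo "exit status: $?" >> "$LOG"
cat "$LOG"
```

With the real path substituted in, an empty or missing log file tells you the call was never attempted, while a log with a non-zero exit status tells you it ran and failed.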
#!/bin/bash
##Clear out the data folder
##
##
cd /opt/ecommerce/backup/analysis/Data
find . -maxdepth 1 -type f -exec rm {} \;
cd /opt/ecommerce/backup/analysis/Data/Scripts
##
./DoABunch.sh BlaBla 1 1017531
./DoABunch.sh BlaBla 1 1020055
##
##
## Copy the edi data to the temp BCfiles folder on the QA interior
scp /opt/ecommerce/backup/analysis/Data/*.zip eXXXXX@xlqxxxxx:/tmp/BCfiles/.
##
## Create a daily folder and put all of the stuff in the daily folder for test artifacts
ssh exxxxx@xlqxxxx 'mkdir -p /opt/ecommerce/backup/analysis/Data/$(date +%d-%b-%Y)'
ssh exxxxx@xlqxxxx 'chmod -R 2777 /opt/ecommerce/backup/analysis/Data/$(date +%d-%b-%Y)'
##
## Copy everything gathered to the folder just created
scp /opt/ecommerce/backup/analysis/Data/*.zip e22013@xlqxxxxx:/opt/ecommerce/backup/analysis/Data/$(date '+%d-%b-%Y')/.
scp /opt/ecommerce/backup/analysis/Data/*.txt e22013@xlqxxxxx:/opt/ecommerce/backup/analysis/Data/$(date '+%d-%b-%Y')/.
ssh exxxxx@xlqxxxxx 'chmod -R 2777 /opt/ecommerce/backup/analysis/Data/$(date +%d-%b-%Y)'
##
nohup /opt/ecommerce/backup/analysis/Data/Scripts/DoFuelDaily.sh &
##
exit;
Comments:

- roaima (Feb 23 at 15:51): `nohup /opt/ecommerce/backup/analysis/Data/Scripts/DoFuelDaily.sh &`?
- roaima (Feb 23 at 15:55): `nohup` will trap and ignore (almost) all signals. I'd suggest it's good practice when the controlling script/program exits while the backgrounded job is intended to continue running.
- wurtel (Feb 23 at 15:58): There's no need for `at` to run the script in the background anyway! Run it in "the foreground", i.e. no `nohup` and no `&`. Let `at` take care of waiting for it etc.
- MattBianco (Feb 23 at 16:05): Use `exec` instead of `nohup` for the final script call if you're not doing anything after that. Or make another call to `at` with "now" as the time.
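The suggestions in the comments above can be sketched like this (the stand-in script is hypothetical): run the final script in the foreground so the `at` job simply waits for it, or make it the very last statement with `exec` so it replaces the calling shell:

```shell
# Hypothetical stand-in for the final script.
cat > /tmp/final.sh <<'EOF'
#!/bin/bash
echo "final script done"
EOF
chmod +x /tmp/final.sh

# wurtel's approach: a plain foreground call -- no nohup, no "&";
# the calling script (and the at job) waits until it finishes.
/tmp/final.sh

# MattBianco's alternative, as the very last line of the parent script:
# exec /tmp/final.sh    # replaces the shell; nothing after it runs
```

Either way there is no backgrounded child for `at` to lose track of, which sidesteps the original problem entirely.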