
Why does read appear to fail silently in my script called from another script?

1 vote
1 answer
1908 views
Let me paint a picture for you. You write some deployment or build scripts which run commands on a remote host to do things like create users or install and update packages, and so you might worry that you forgot to add that --gecos '' to your adduser call, or didn't provide DEBIAN_FRONTEND=noninteractive for every apt invocation, or missed something you aren't even aware of.

Such scripts tend to be non-interactive: you don't want to be prompted for anything, and if prompting does happen for some reason, you want the whole operation to fail. However, what I'm seeing with bash in this scenario is that instead of failing and returning a non-zero exit code (even if I set -e), it does the worst possible thing: it gets interrupted and returns 0. This means the caller has no way of knowing that the script did not complete properly but was in fact interrupted.

Here's a silly example to illustrate and toy with the problem. You can imagine a bunch of other stuff going on around this script. Maybe this bit of code is actually executed on the remote host via ssh and installs some packages instead of calling read, and after this block you expect that all of the code ran successfully.
shell
set -e
# Do stuff...
/bin/bash <<EOF
echo "gonna try to be interactive"
read line
echo "should not get here"
EOF
echo "return value $?"
Output
gonna try to be interactive
return value 0
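
For concreteness, the remote scenario mentioned above might look roughly like the sketch below (the host is a placeholder and the heredoc body just mirrors the toy example): the caller still sees exit status 0 even though the remote script stops at the read.
shell
# Hypothetical ssh variant of the same example: the script stops at read,
# yet ssh reports exit status 0 back to the caller.
ssh user@example.com /bin/bash <<EOF
echo "gonna try to be interactive"
read line
echo "should not get here"
EOF
echo "return value $?"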
I have already noticed that if I use /bin/sh instead of bash I get different behaviour: it does not silently stop execution at the read and return 0. By default it just moves on, and I need an explicit set -e to make it short-circuit, in which case it does indeed give me return value 1. But is there any way to make this work sanely with bash?
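
For reference, the /bin/sh comparison is roughly the following (a sketch, assuming the same heredoc body as above with set -e added inside; exact behaviour may differ between sh implementations):
shell
# Same toy example run with /bin/sh and an explicit set -e inside,
# which is the variant described above that gives return value 1.
/bin/sh <<EOF
set -e
echo "gonna try to be interactive"
read line
echo "should not get here"
EOF
echo "return value $?"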
Asked by Timo (683 rep)
Sep 6, 2020, 10:55 AM
Last activity: Sep 11, 2020, 09:41 PM