I was messing around with PowerShell this week and discovered that you are required to sign your scripts so that they can be run. Is there any similar security functionality in Linux for preventing bash scripts from being run?

The only similar functionality I'm aware of is SSH requiring a certain key.

Sounds a bit like an ad-hoc solution to package signing to me. I don't know if Windows has cryptographic package signing the way Linux has. – Wildcard yesterday
@leeand00 A script is a special case of a software package and I can't see any point to singling out that case. – Gilles yesterday
The mechanism I'm most fond of is the way ChromeOS does this -- putting the only filesystem not flagged noexec on a read-only partition on a dm-verity signed block device. – Charles Duffy yesterday
source.android.com/security/verifiedboot talks about Android's adoption of that (initially ChromeOS) feature. – Charles Duffy yesterday
You can consider bash as a bunch of commands that can be typed manually in a command-line interface. What's the point of restricting scripts when you can type their contents into the command line anyway? – Ding-Yi Chen yesterday

Yes and no.

Linux software distribution works somewhat differently from Windows software distribution. In the (non-embedded) Linux world, the primary method to distribute software is via a distribution (Ubuntu, Debian, RHEL, Fedora, Arch, etc.). All major distributions have been signing their packages systematically for about a decade.

When software is distributed independently, it's up to the vendor to decide how they'll ship their software. Good vendors provide package sources that are compatible with the major distributions (there's no unified distribution mechanism for all of Linux: software distribution is one of the main points of differentiation between distributions) and that are signed with the vendor's key. Linux distributions rarely act as a signing authority for third-party vendors (Canonical does this with Ubuntu partners, but that covers very few vendors), and I think all major distributions use the PGP web of trust rather than the TLS public key infrastructure, so it's up to the user to figure out whether they want to trust a key.
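
For illustration, verifying a vendor-signed package looks roughly like this from the user's side (the key and package names below are made up, and the exact commands depend on the distribution and package format):

$ # RPM-based systems: import the vendor's public key, then check the package signature
$ rpm --import vendor-public-key.asc
$ rpm --checksig vendor-package-1.0-1.x86_64.rpm

$ # tarball releases often ship with a detached PGP signature instead
$ gpg --verify vendor-package-1.0.tar.gz.asc vendor-package-1.0.tar.gz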

There's no special mechanism that singles out software packages that consist of a single script from software packages that consist of a native executable, a data file, or multiple files. Nor is any signature verification built into any common script interpreter, because verifying a software package is a completely orthogonal concern from running a script.

I think Windows annotates files with their origin, and requires user confirmation to run a file whose origin is “downloaded” rather than “local”. Linux doesn't really have a similar mechanism. The closest thing is execution permission: a downloaded file does not have execution permission, so the user needs to enable it explicitly (chmod +x on the command line, or the equivalent action in a file manager).
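
To make that concrete, a freshly downloaded script cannot be run directly until the user grants it execute permission (the URL and file name here are placeholders):

$ wget -q https://example.com/some-script.sh
$ ./some-script.sh
bash: ./some-script.sh: Permission denied
$ chmod +x some-script.sh        # explicit opt-in by the user
$ ./some-script.sh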

FWIW, on top of this, PowerShell can be configured (via policy settings) to only execute signed scripts; the policy can require that all scripts be signed, that only "remote origin" scripts be signed, or that no scripts run at all. It works best in an AD environment with key management and central policy management. It can be bypassed :-) – Stephen Harris yesterday
@StephenHarris Well, yeah, if you set it to bypass... – leeand00 yesterday
@leeand00 - Apparently base64 encoding also works as a bypass but I don't know if that's been closed in newer versions of PowerShell. – Stephen Harris yesterday
@leeand00 - see darkoperator.com/blog/2013/3/5/… for some fun :-) Basically pass the base64 encoded script as a parameter on the command line :-) Easy enough to wrapper! – Stephen Harris yesterday
SELinux annotates files with their origin. It's one of its main premises. – loa_in_ yesterday

Linux does not provide the capability to limit the execution of bash scripts based on digital signatures.

There is some work on authenticating binary executables. See https://lwn.net/Articles/488906/ for info.
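
As a rough sketch of that approach (IMA appraisal, described in the LWN article): files are signed offline and the kernel refuses to run anything whose signature does not verify against a key on its keyring. This assumes a kernel built with IMA appraisal and the ima-evm-utils package; the key path and file name here are hypothetical:

$ # sign a binary; the signature is stored in the security.ima extended attribute
$ evmctl ima_sign --key /etc/keys/ima-privkey.pem /usr/local/bin/mytool

$ # then boot with appraisal enforced, e.g. on the kernel command line:
$ #   ima_appraise=enforce ima_policy=appraise_tcb
$ # (the matching public key has to be loaded onto the kernel's .ima keyring)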

Upvote for a direct answer without suggesting a hacky work-around. – user394 yesterday

In a word, "no".

Linux doesn't really differentiate between executables and scripts; the #! at the beginning is a way to tell the kernel what program to run to evaluate the input, but it's not the only way a script can be executed.

So, for example, if I have a script

$ cat x
#!/bin/sh 
echo hello

Then I can run this with the command

$ ./x

That will cause the kernel to try and execute it, spot the #!, and then effectively run /bin/sh ./x instead.

However I could also run any of these variants as well:

$ sh ./x
$ bash ./x
$ cat x | sh
$ cat x | bash
$ sh < x

or even

. ./x

So even if the kernel tried to enforce signing at the exec layer, we could bypass it by simply running the interpreter with the script as a parameter.

This means that signing code would have to be in the interpreter itself. And what would stop a user from compiling their own copy of a shell without the signing enforcement code?

The standard solution to this isn't to use signing, but to use Mandatory Access Controls (MAC), such as SELinux. With MAC systems you can specify exactly what each user is allowed to run and how processes may transition between domains. So, for example, you can say "normal users can run anything, but the web server and CGI processes can only access stuff from the /var/httpd directory; everything else is rejected".
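
For a flavour of what that looks like on a system running SELinux with the common "targeted" policy (the paths and file names here are only illustrative):

$ ps -eZ | grep httpd                  # the web server runs confined in the httpd_t domain
$ ls -Z /var/www/html/index.html       # content it may serve is labelled e.g. httpd_sys_content_t
$ getsebool httpd_enable_cgi           # booleans toggle what the confined domain may do
$ chcon -t httpd_sys_content_t /var/www/html/new-page.html   # label new content so the policy permits access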

This means that signing code would have to be in the interpreter itself. And what would stop a user from compiling their own copy of a shell without the signing enforcement code? Not allowing the execution of any unsigned executables would do it, if the user doesn't have the signing key. There are various *nix projects for this already. – user3137702 yesterday

Linux distros usually have gnupg. It sounds to me like all you want is a simple bash wrapper that checks a detached gpg signature against the argument script and only proceeds to run the script if the check succeeds:

#!/bin/sh
# verify the detached signature ($1.asc) against the script ($1) using the keys
# in gpgv2's keyring; only run the script (with its arguments) if that succeeds
gpgv2 "$1.asc" "$1" && bash "$@"
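
For example, assuming the wrapper above is saved as run-signed (a name I'm making up) and the author's public key has been put into the keyring gpgv2 reads (~/.gnupg/trustedkeys.gpg by default, or one passed with --keyring):

$ # author: create a detached, ASCII-armoured signature next to the script
$ gpg --armor --detach-sign deploy.sh        # writes deploy.sh.asc

$ # user: run the script only through the wrapper
$ ./run-signed deploy.sh --some-arg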
The only thing this doesn't prevent is running a script that somebody just made...on their own... – leeand00 yesterday

The counter-question that comes to mind immediately is "Why would you ever want to prevent users from running programs they wrote?" Several possibilities exist:

  1. It is literally impossible to detect who authored the code in the first place. The owner of the script file is just whoever actually saved the content of that file, regardless of where it came from. So enforcing a signature is just a complicated substitute for a confirmation dialogue box: "Are you sure you want to do this?" In Linux part of this problem is solved transparently with signed packages, and mitigated by the fact that users have limited access by default. The user is also expected to know that running others' code can be dangerous*.
  2. In the same vein, signing a script is a much more complex operation than saving a file. In the best case this prompts the user to realise that they are performing an action similar to signing a document, and should inspect what it says before continuing. More likely, it simply ensures a very minimal level of technical proficiency on the part of the user before they are allowed to run the script. In the worst case it merely demonstrates a willingness to jump through a long series of hoops to run what they wanted to run. Technical proficiency is assumed on Linux*.
  3. It is more likely that people will detect obviously malicious code when typing/pasting a series of commands into their command line. Plaintext snippets meant to be copied and pasted are usually smaller than the series of commands necessary to do something properly nefarious. The user can also carefully copy and paste each line separately, understanding what happens as it happens. With a script it's possible the user has never looked at the code at all. This may be a useful application of signed scripts, at the all-too-common cost of complacency after the 12th time you have to do it.

* This is probably becoming less and less true as more people start using Linux


The reason the systems have evolved differently is that Linux has the 'exec' permission bit on files, while Windows uses file extensions to determine executability.

So in Windows it's easy to trick the user into downloading a file with an ".exe", ".bat", or ".scr" extension (which Explorer hides by default). Double-clicking that file gives you arbitrary code execution. Hence a large mechanism of origin tracking and executable/script signing was built to mitigate this risk.

On Linux, you might be able to get a file to the user, but you can't easily force the 'exec' bit to be set. Additionally, it's possible to make entire filesystems 'noexec'.

You can in any case run a script explicitly by invoking the interpreter. You can even create shell scripts at runtime and pipe them into "sh", or run "sh -c".
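
A quick sketch of both points (the script is made up, and /tmp is assumed here to be mounted noexec): the kernel refuses direct execution on a noexec filesystem, but an interpreter that merely reads the file is unaffected:

$ cat /tmp/hello.sh
#!/bin/sh
echo hello
$ chmod +x /tmp/hello.sh
$ /tmp/hello.sh                 # direct execution: refused because of noexec
bash: /tmp/hello.sh: Permission denied
$ sh /tmp/hello.sh              # the interpreter just reads the file, so this runs
hello
$ sh -c "$(cat /tmp/hello.sh)"  # so does feeding the contents to sh -c
hello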


By custom, many archiving programs do not preserve the execute bit on contained files. This makes it impossible to run arbitrary executables. Well, almost.

The point is, as described in another answer, that the lack of an execute bit doesn't prevent you from passing such a script directly to bash. While it's arguable that most such scripts are bash scripts, the shebang can specify any program as the interpreter. This means it's up to the user to run the appropriate interpreter if they decide to ignore the executable-bit semantics.

While this isn't much, it pretty much covers the prevention of running untrusted executables on *nixes with just the kernel and the shell.

As I mentioned in one of the comments, there is one other layer of protection: SELinux, which tracks the origin of files based on a set of rules. A properly set-up SELinux would not, for example, allow root to run an executable downloaded from the internet even if its execute bit is set, and even if you copy and move the file around. One can add a rule that such files can only be run through another binary that checks the signature, not unlike what you mentioned in your question.

So, in the end, it's a matter of configuring commonly preinstalled tools, and the answer is yes.

many archiving programs do not preserve the execute bit on contained files ... well, that's kind of a handicap when you actually want to use it for archiving. Fortunately tar does preserve the execute bit. – pjc50 18 hours ago
You have to use tar -p source – loa_in_ 18 hours ago
-p, --preserve-permissions, --same-permissions means extract information about file permissions (default for superuser) – loa_in_ 18 hours ago
No, you do NOT need -p. I see what the man page says, but it's not what happens. touch permtest; chmod +x permtest; tar cf permtest.tar.gz permtest; rm permtest; tar xf permtest.tar.gz; ls -l permtest - it's executable here, and I'm not root. – domen 17 hours ago
I'll try to improve my answer then. – loa_in_ 8 hours ago
