~/bin vs. ~/.local/bin for user bash scripts?


For one user account, I want to have some bash scripts, which of course would be under version control.

The obvious solution is just to put the scripts in a git repository and make ~/bin a symlink to the scripts directory.

Now, on systemd systems, ~/.local/bin is supposedly the directory for user scripts.

My question is mostly: what are the tradeoffs between using ~/bin and ~/.local/bin as the directory for my own bash scripts?

One simple scenario I can come up with is third-party programs which might modify ~/.local/bin and put their own scripts/launchers there, similar to third-party applications which put their *.desktop files in ~/.local/share/applications.

Any advice on this? Is ~/.local/bin safe to use for my scripts, or should I stick to the classic ~/bin? Does anyone have a better convention?

(Btw.: I am running Debian everywhere, so I do not worry about portability to non-systemd Linux systems.)

in reply to The Ramen Dutchman

Package managers tend to assume they are the only ones touching files in /usr/share. If you try to change any files there, the next update may delete your changes or download a new version of the file, stomping them. Instead, your local changes should go in /usr/local (if you want something system-wide) or ~/.local (if it only applies to a specific user).

E.g. if you made a custom .desktop file to show up in your app launcher, or a custom .xsession file to show up in a login manager.
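For instance, a user-local launcher entry is just a file dropped under ~/.local/share/applications. A minimal sketch, where "myeditor", its name, and its Exec path are all made-up placeholders:

```shell
# Create a user-local .desktop entry; package updates never touch ~/.local.
# "myeditor" and the Exec path are hypothetical placeholders.
mkdir -p ~/.local/share/applications
cat > ~/.local/share/applications/myeditor.desktop <<'EOF'
[Desktop Entry]
Type=Application
Name=My Editor
Exec=/home/me/bin/myeditor %F
Terminal=false
Categories=Utility;
EOF
```

Desktop environments that follow the XDG spec pick entries in ~/.local/share/applications up alongside the system-wide ones in /usr/share/applications, with the user copy taking precedence.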

This entry was edited (2 weeks ago)
in reply to wolf

Personally, I create a ~/.get-going directory (or whatever you want to call it) and put all my scripts in there, named with a number prefix like "10-first.sh" and "20-second.sh" so they load in order, with aliases and any critical stuff last. Then one line in .bashrc, .zshrc, or whatever rc file you like can include them all.
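A minimal sketch of that setup, assuming the directory is named ~/.get-going and the scripts end in .sh, is a single loop in the rc file:

```shell
# In ~/.bashrc (or ~/.zshrc): source every script in ~/.get-going in
# lexical order, so "10-first.sh" runs before "20-second.sh".
for f in ~/.get-going/*.sh; do
    [ -r "$f" ] && . "$f"
done
```

The -r guard also covers the case where the glob matches nothing in bash; in zsh you may want setopt null_glob as well so an empty directory doesn't raise a glob error.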

I made some bash scripts for distro-hopping that are now [undisclosed] years old, so when I need to reinstall I can basically back up a few folders: ~/.get-going, ~/bin (where I put AppImages and such), and sometimes ~/Development (I don't always need that one because those projects already exist as repos). A lot of people back up their whole home directory, but I prefer my method, and that's why we use Linux: I don't want my settings for every app coming with me when I go on a new journey. Choose your own adventure.

in reply to wolf

Neither ~/bin nor ~/.local/bin is part of most shells' default $PATH, so you're going to have to modify the user's shell profile (or rc file) to include it. It's possible that your favorite distro includes it, but mine does not. For example:

$ unset PATH

$ /bin/bash --noprofile --norc
bash-5.2$ echo $PATH
/usr/local/bin:/usr/bin

or

$ unset PATH

$ /bin/zsh --no-rcs --no-global-rcs
Sinthesis% echo $PATH
/bin:/usr/bin:/usr/ucb:/usr/local/bin

$ ls -l /bin
lrwxrwxrwx. 1 root root 7 Jan 23  2024 /bin -> usr/bin
That was on Fedora. The funny thing is /bin is symlinked to usr/bin, weeeee.

This is on Debian

Sinthesis@debian:~$ /bin/bash --noprofile --norc
bash-5.2$ echo $PATH
/usr/local/bin:/usr/local/sbin:/usr/bin:/usr/sbin:/bin:/sbin:.

I'm not sure why you're bringing the XDG or systemd "standard" into this. The POSIX standard would be more appropriate, but it says nothing on the matter, nor should it really. The most important thing is: be predictable. If the user has a problem with one of your scripts, what do they do first? "which wolf_bin" will show them the full path to the script. So really, the location does not matter much.

That said I would go with one of these two options:

1) Make a package for your distro. This may be overkill for a couple of scripts, but you did say they're in a git repository, so you could automate it. The package would install to /usr/bin, which requires sudo or root. If the scripts should only be run by one user, restrict the owner and group permissions accordingly.

2) A pattern I like, especially for lightweight things such as scripts that don't require compiling or OS management and are already in git: a "hidden" (dot) directory in the user's home where the repo lives, e.g. ~/.lemmywolf/. Then add the scripts directory to the user's $PATH, e.g. PATH=$PATH:~/.lemmywolf/scripts. This is what some fairly large projects like pyenv or volta do. You could take it a step further and modify this installer script to your liking: github.com/pyenv/pyenv-install…
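A sketch of that pattern; the repo URL is a placeholder (this is not the real pyenv installer), so substitute your own repository:

```shell
# One-time install: clone the repo into a hidden directory in $HOME,
# then put its scripts directory on $PATH for future shells.
# https://example.com/... is a placeholder URL; use your own repo.
[ -d ~/.lemmywolf ] || git clone https://example.com/lemmywolf/scripts.git ~/.lemmywolf
printf '\nexport PATH="$PATH:$HOME/.lemmywolf/scripts"\n' >> ~/.bashrc
# Make it effective in the current shell as well:
export PATH="$PATH:$HOME/.lemmywolf/scripts"
```

Updating then becomes a plain "git pull" inside ~/.lemmywolf, with no root involved.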

/edit: 20 years of Linux (Red Hat AS 2.1) and 5 years of Unix (HP-UX & Solaris) before that.

/edit2 I just noticed the pyenv-installer does not modify the user's shell profile. That could easily be added to the script though.

in reply to Sinthesis

“I’m not sure why you’re bringing the XDG or systemd ‘standard’ into this.”


Probably because their "basedir" specification does recommend ~/.local/bin to be in $PATH. I'm sure there's more than one distro following that spec, whether you'd consider it standard or not:

“User-specific executable files may be stored in $HOME/.local/bin. Distributions should ensure this directory shows up in the UNIX $PATH environment variable, at an appropriate place.”
in reply to Akatsuki Levi

If I hand-write bash scripts, or for those single-binary downloads, they go into ~/bin. ~/.local is already used by a ton of packages. This helps a ton when it comes to backups, or just for finding where I put stuff.

My ~/.local is 283 GB; it's where podman/docker/etc. put containers, so it may as well be a system-managed folder at that point. My ~/bin is only 120 MB and is a lot simpler to back up, restore, or sync to other desktops.

in reply to wolf

Personally I put scripts in ~/.local/bin/scripts/ instead of just ~/.local/bin/ because I like to keep them separate from other binaries. To note: even though ~/.local/bin/ is in PATH, its subfolders are not, so if you do that you need to add the scripts subfolder to PATH if you want to run the scripts directly.
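Assuming a POSIX shell, that extra entry is one line in ~/.profile (or your shell's rc file):

```shell
# ~/.local/bin may already be on $PATH, but its subdirectories are not;
# append the scripts subfolder explicitly so scripts can be run by name.
export PATH="$PATH:$HOME/.local/bin/scripts"
```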

Well, actually my scripts are in mydotfilesrepo/home/.local/bin/scripts, and I use GNU Stow to symlink mydotfilesrepo/home to /home/myuser/ (same for mydotfilesrepo/etc/ and mydotfilesrepo/usr/, which are symlinked to /etc and /usr), but it's the same result. Stow is pretty cool for centralizing your configs and scripts in one repo!
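For reference, a minimal Stow invocation matching that layout might look like this (directory names taken from the comment; check man stow for the details):

```shell
# From the top of the dotfiles repo: "home" is the Stow package name,
# --target is where the symlinks get created.
cd ~/mydotfilesrepo        # assumed repo location
stow --target="$HOME" home
# System-wide pieces need root:
sudo stow --target=/etc etc
```

Stow "folds" trees where it can, so a single symlink near the top of the hierarchy often covers a whole subtree of files.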

I've never seen ~/bin before so I can't comment on whether it's a good idea.

in reply to wolf

I migrated to fish recently, and at first I was really annoyed that I had to decompose my ~/.bash_aliases into 67 different script files inside ~/.config/fish/functions/, but (a) I was really impressed with the tools that fish gave me to quickly craft those script files:

~> function serg
    sed -i -e "s/$1/$2/g" $(rg -l "$1")
end
~> funcsave serg
funcsave: wrote ~/.config/fish/functions/serg.fish

and (b) I realized it was something I ought to have done a while ago anyway.

Anyway, all this is to say that fish ships with a lot of cool, sensible & interesting features, and one of those is a built-in place where your user scripts should live. (Mine is a symlink to ~/Dropbox/config/fish_functions so that I don't need to migrate them across computers.)