Hey I want to add a command to my system. I am not using any package-format or anything. I just want to install a script that I wrote.

I know of some ways to do that:

  • add my script into whatever is the first thing in $PATH
  • add a custom path to $PATH (in /etc/profile.d/ or /etc/environment) and put the script into the custom path
  • add my script into /usr/bin or /usr/local/bin

I remember reading that profile.d/ doesn’t get picked up by all shells and that /etc/environment (which exists but is empty on my system) shouldn’t be used.
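(For context on that profile.d caveat: the `*.sh` files there are only sourced by *login* shells of Bourne-compatible shells, so a PATH entry added there is invisible to cron jobs and other non-login contexts. A quick sketch to see the difference, assuming a POSIX `sh` is installed:

```shell
#!/bin/sh
# A login shell ("-l") reads /etc/profile, which on most distros
# sources /etc/profile.d/*.sh; a plain shell started with a scrubbed
# environment ("env -i") reads neither, so the two PATHs can differ.
env -i sh -lc 'echo "login shell PATH: $PATH"'
env -i sh -c  'echo "plain shell PATH: $PATH"'
```

On a typical desktop distro the first line shows any profile.d additions and the second does not.)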

What should I do to ensure that my command will be available everywhere on any GNU/Linux or BSD system?

EDIT: I should clarify that I am asking this only out of curiosity. I just like knowing how this stuff works. The script was just an example (I thought it would make the question easier to understand, lol). What I am interested in is a way to install a command with no chance of that command later not being found by other programs. I find many different answers to this and they all seem a little muddy, like “doing x should usually work”. I want to know the solution that always works.

  • folkrav@lemmy.ca · 1 year ago

    He just told you why not to put it in /usr/bin: it’s where your package manager puts executables.

    I’m not too sure why it’s important where your users put your script from a script author perspective? Otherwise, just check the default $PATH content for a fresh user on said system, and put it somewhere in there.
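    A sketch of that check-then-install flow, with “myscript” as a placeholder name; the `PREFIX` override exists only so the sketch can be tried without root — the real target would be /usr/local/bin, which is on the default PATH of virtually every Linux and BSD and is left alone by package managers:

    ```shell
    #!/bin/sh
    # 1. Roughly the PATH a fresh user's login shell would get:
    env -i sh -lc 'echo "default PATH: $PATH"'

    # 2. Install the script into a directory from that list.
    #    "myscript" is a placeholder; PREFIX is overridable only so
    #    this can be tried out without root.
    PREFIX="${PREFIX:-/usr/local}"
    printf '#!/bin/sh\necho "hello from myscript"\n' > myscript
    chmod 0755 myscript
    if [ -w "$PREFIX/bin" ]; then
        install -m 0755 myscript "$PREFIX/bin/myscript"
    else
        echo "no write access to $PREFIX/bin; rerun as root or set PREFIX" >&2
    fi
    ```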

    • niemand@discuss.tchncs.de (OP) · 1 year ago

      > He just told you why not to put it in /usr/bin: it’s where your package manager puts executables.

      I thought he might tell me why my package manager and I can’t both use this directory. The reason for that is not obvious to me

      • chickenf622@sh.itjust.works · 1 year ago

        Cause it’s good to know if something is an installed package at a glance. I also imagine it reduces the risk of accidentally overwriting your own scripts if a package happens to have the same name as one of your local scripts.
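        (That collision risk is easy to check before picking a name: POSIX `command -v` prints the first match on $PATH, so you can see whether a name is already taken. “myscript” below is a placeholder:

        ```shell
        #!/bin/sh
        # Report whether each name already resolves to something on
        # PATH. "myscript" is a placeholder for the script you want
        # to install; "ls" is included as a name that is always taken.
        for name in ls myscript; do
            if command -v "$name" >/dev/null 2>&1; then
                echo "$name is taken: $(command -v "$name")"
            else
                echo "$name is free"
            fi
        done
        ```

        In bash, `type -a name` goes further and lists *every* match on PATH, which shows exactly what would shadow what.)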

      • folkrav@lemmy.ca · 1 year ago

        Other people already answered you, but it’s mostly for:

        1. Keeping things obvious: you know who did what
        2. Avoiding potential collisions
      • aperson@beehaw.org · 1 year ago

        Because in situations like this, segregation is a good thing. You don’t want automated tools futzing in directories that you might have wanted to keep as-is.

      • InverseParallax@lemmy.world · 1 year ago

        Unix has had a long-running convention of separating “operating system” files from everything else, so you can blow away something like /opt or /home without making your system unbootable.

        If you stick stuff under /usr/bin, then you have to track the files yourself, especially if there are any conflicts.

        Best to just add another path. I use ~/bin because it’s easy to get to, and it’s a symlink from the git repo that holds my portable environment: just clone it, run a script, and I’m home.
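        (A minimal sketch of that setup, with a hypothetical `~/dotfiles` repo standing in for the portable-environment repo; note that ~/bin only helps your own shells — services and other users won’t have it on their PATH:

        ```shell
        #!/bin/sh
        # Hypothetical layout: a git repo at ~/dotfiles containing bin/.
        REPO="$HOME/dotfiles"
        mkdir -p "$REPO/bin"

        # Point ~/bin at the repo's bin/, unless ~/bin is already a
        # real directory we should not clobber.
        if [ ! -d "$HOME/bin" ] || [ -L "$HOME/bin" ]; then
            ln -sfn "$REPO/bin" "$HOME/bin"
        fi

        # Put ~/bin on PATH for future login shells, idempotently.
        grep -qs 'HOME/bin' "$HOME/.profile" || \
            printf '\nPATH="$HOME/bin:$PATH"\n' >> "$HOME/.profile"
        ```

        After cloning the repo on a new machine, running this once makes every script in the repo’s bin/ available to your login shells.)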

        • andruid@lemmy.ml · 1 year ago

          And migrate /opt and /home (or even mount them remotely) so that user data is preserved outside of the system!

          Both features make system administration much more sane!