Unix & Linux Stack Exchange
Q&A for users of Linux, FreeBSD and other Unix-like operating systems
Latest Questions
8
votes
2
answers
1962
views
Unreal Engine games supporting Linux
Unreal Engine and other engines support making games for multiple platforms like iOS, Android, Windows, MacOS and Linux. The games are distributed as pre-compiled executables and they all need specific builds targeted to a particular OS.
With the number of different Linux distributions, do there need to be different builds for Ubuntu, Arch etc?
How can developers ensure that as many Linux distributions as possible are supported?
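For context, the usual answer (sketched here as an illustration, not Unreal-specific) is that a single Linux build targets the oldest glibc the developer wants to support: glibc is backward compatible, so a binary that only needs old symbol versions runs on newer distributions regardless of which distro they are. You can inspect what a given binary requires:

```shell
# Print the newest glibc symbol version a binary depends on; the binary
# runs on any distribution whose glibc provides at least that version.
# (/bin/sh stands in for a game executable here.)
objdump -T /bin/sh | grep -o 'GLIBC_[0-9.]*' | sort -uV | tail -n 1
```

Most other dependencies (SDL, the Vulkan loader, etc.) are typically bundled next to the executable, which is why one "Linux" build usually covers Ubuntu, Arch and the rest without per-distro builds.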
dubious
(191 rep)
Apr 20, 2025, 10:45 AM
• Last activity: Apr 20, 2025, 09:44 PM
1
votes
2
answers
96
views
Django backend died when deploying a new version: Address already in use
I have a Django app running on a Debian server, with a simple deploy script that is automatically invoked via a GitHub webhook:
#!/bin/sh
git pull
/home/criticalnotes/.local/bin/poetry install --with prod --sync
/home/criticalnotes/.local/bin/poetry run ./manage.py migrate
sudo /usr/sbin/service api.critical-notes.com restart
echo "service api.critical-notes.com restarted"
So this deploy script pulls the latest code from git, installs the dependencies, runs the migrate script, and then restarts the service.
My `api.critical-notes.com.service` file:
[Unit]
Description=api.critical-notes.com
[Service]
User=criticalnotes
Group=criticalnotes
Restart=on-failure
WorkingDirectory=/home/criticalnotes/api.critical-notes.com
ExecStart=/home/criticalnotes/.local/bin/poetry run uvicorn criticalnotes.asgi:application --log-level warning --workers 8 --uds /tmp/uvicorn.sock
[Install]
WantedBy=multi-user.target
This setup has worked perfectly fine for a pretty long time, but today when I pushed new code to GitHub the site stopped working. After looking into what was going on, I noticed that the api.critical-notes.com service wasn't running any more; it kept dying with failures, and when I tried to start the backend manually I got this error:
Address already in use
I have no idea what caused this problem, especially since this has never happened before and I have not made any deploy or setup changes today (or any time recently). So my question is how this could have happened - how could the address still be in use when the backend was not running? And secondly, how can I improve my deploy script so this can't happen again?
It was quite scary to have the site be offline because the backend was down, for about 30 minutes, while I was trying to figure out why my code push broke things. I don't want that to happen again 😅
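One plausible culprit, offered as a guess rather than a diagnosis: uvicorn binds the Unix domain socket /tmp/uvicorn.sock, and if a previous process exits without unlinking it, the next start can fail with "Address already in use" even though nothing is listening. A hedged mitigation is to clear a stale socket before each start, in the unit's [Service] section:

```ini
[Service]
# Remove a stale socket left behind by a previous run before uvicorn binds it
ExecStartPre=/bin/rm -f /tmp/uvicorn.sock
```

Moving the socket out of /tmp (which tmp-cleaners may touch) and letting systemd manage its directory via RuntimeDirectory= would be a more robust variant.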
Kevin Renskers
(61 rep)
Aug 5, 2023, 09:33 PM
• Last activity: Aug 23, 2023, 09:46 AM
0
votes
0
answers
65
views
Imaging a Linux Lab
I am running Xubuntu on the computers in one of my labs. What is the best way to deploy my master image to approximately 20 computers? Right now I am just using Clonezilla and physically moving the drives to an imaging PC. I'm looking for a more efficient way. Thanks
SilverSurfer
(39 rep)
May 4, 2022, 01:18 PM
2
votes
2
answers
9910
views
How to install Nitrux OS using znx?
I want to check out Nitrux, which can be "deployed" using `znx` (how-to here). `znx` does not seem to open correctly in Fedora, so I would like to deploy it using the terminal. The how-to says that I should install it on a partition with at least 4 GB.
I have one big free partition on my computer, about 200 GB. Do you know if Nitrux will use the entire partition if I deploy it on that partition? Or will `znx` assign 4 GB of that partition and let me use the rest?
User12547645
(183 rep)
Dec 9, 2018, 10:12 PM
• Last activity: Jan 5, 2022, 01:11 PM
0
votes
1
answers
646
views
Bash script to restart java application after Jenkins compiling
I'm a noob in Linux! I have my own server with Jenkins installed. I need to create a bash script that runs the application (or restarts it if it has already been started) after Jenkins compiles it. I tried to use the screen utility in Linux, but it's not working for me. I wrote this script:
screen -X -S JavaTelegramBot quit
screen -d -m -S JavaTelegramBot
screen -X -S JavaTelegramBot java -jar "path/to/jar"
When I type `screen -ls`, the list is empty, so the application is not running. I even tried to use nohup; it only keeps the application running for a few seconds, until Jenkins finishes its build. The script is started via a shell command in Jenkins after the build.
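For reference, a sketch of what the restart script could look like, assuming `screen` really is the desired supervisor (session name and jar path are placeholders). The original's `screen -X -S JavaTelegramBot java -jar ...` sends `java` as a screen command to a session, which is not how programs are launched; `-dmS` starts a detached session running the command directly:

```shell
# restart_bot: kill any previous session, then run the jar in a new
# detached screen session.
restart_bot() {
    jar=$1
    screen -S JavaTelegramBot -X quit 2>/dev/null || true  # ignore "no session found"
    screen -dmS JavaTelegramBot java -jar "$jar"
}
# usage: restart_bot /path/to/JavaTelegramBot.jar
```

Another likely factor: Jenkins kills processes spawned by a build once the build ends (its ProcessTreeKiller), which matches the "runs for a few seconds" symptom; setting `BUILD_ID=dontKillMe` in the launch step's environment is the usual workaround, though running the bot as a systemd service would be sturdier than screen.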
Alex F
(3 rep)
Aug 28, 2021, 07:43 PM
• Last activity: Aug 30, 2021, 04:19 PM
1
votes
1
answers
1829
views
Temporarily disable user?
We use continuous deployment to deploy changes to our production host. At the same time we would like to prevent access to the deployment account, except during specific deployment windows. At the moment the solution we are looking at is simply renaming the .ssh folder.
Is there another approach that could be used or another approach to limit deployments outside of allocated windows?
This is for Ubuntu.
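One possible approach, sketched with a placeholder account name `deploy`: lock the password *and* expire the account. Locking alone does not stop key-based SSH logins, but an expired account is refused regardless of how it authenticates:

```shell
# Helpers to toggle a deployment account (must run as root).
disable_deploy() {
    # An expiry date in the past blocks SSH public-key logins too.
    usermod --lock --expiredate 1970-01-02 "$1"
}
enable_deploy() {
    # An empty expiry date means "never expires"; --unlock restores the password.
    usermod --unlock --expiredate '' "$1"
}
# usage: disable_deploy deploy   (and enable_deploy deploy for a window)
```

Compared with renaming the .ssh folder, this also covers password logins and is visible in the account's state rather than in filesystem trickery.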
Andre M
(503 rep)
Feb 9, 2018, 03:08 PM
• Last activity: Aug 30, 2021, 02:38 PM
0
votes
1
answers
507
views
Deploying applications in old distributions compiled in the new distribution
I have a binary built on Ubuntu 20 (in a Docker container FROM ubuntu:latest).
What if I want to run this binary on, for example, Ubuntu 16?
I know that I may run into the problem that the `libc.so` in Ubuntu 16 is not compatible with the `libc.so` in Ubuntu 20 that the binary was linked against, and I will get a runtime error along the lines of "GLIBC symbols not found".
What are the best practices in such cases? Static linking with the system libs and building on each target platform are not considered.
How about shipping all the system shared libraries (`libc`, `libpthread`, `ld.so` and so on) from the build platform to the target platform? Then my application running on Ubuntu 16 would use the `libc` from Ubuntu 20 that it was linked against (I would specify the path to the desired `libc` via `LD_LIBRARY_PATH`, for example).
What problems can I face with this approach?
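One pitfall worth flagging up front: `LD_LIBRARY_PATH` only controls where libraries are searched; the dynamic loader (`ld.so`) itself is named by an absolute path baked into the ELF header at link time, and glibc generally insists on a matching loader. You can see the hard-coded path like this (using /bin/sh as a stand-in for the application binary):

```shell
# The ELF interpreter is fixed at link time; LD_LIBRARY_PATH cannot
# redirect it, so by default the target system's ld.so runs the binary.
readelf -l /bin/sh | grep interpreter
```

The usual workaround when bundling a whole glibc is to invoke the bundled loader explicitly, e.g. `./libs/ld-linux-x86-64.so.2 --library-path ./libs ./myapp` (illustrative paths), or to rewrite the interpreter path with patchelf.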
ibse
(371 rep)
Jul 10, 2021, 09:17 AM
• Last activity: Jul 11, 2021, 03:15 AM
0
votes
0
answers
15
views
How to name a newer version of command, which is not fully compatible with old version
I have two versions of the same command. The newer version is not fully backward compatible with the old version. Should I add a version number to the command name in this case, like so:
/usr/bin/foo
/usr/bin/foo-2
where foo-2 is the new version (2.0), or is there some other deployment strategy? This alternative would break if the old version becomes obsolete and foo then actually is foo-2. How do I deal with that?
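The versioned-name scheme you describe is common practice (compare `python2`/`python3`): install only versioned binaries and let an unversioned symlink name the current default, so when the old version is retired only the symlink changes. A runnable sketch in a scratch directory, with hypothetical names:

```shell
# Install foo-1 and foo-2 side by side; "foo" is a symlink to the default.
bin=$(mktemp -d)
printf '#!/bin/sh\necho foo 1.0\n' > "$bin/foo-1"
printf '#!/bin/sh\necho foo 2.0\n' > "$bin/foo-2"
chmod +x "$bin"/foo-*
ln -s foo-2 "$bin/foo"   # scripts needing the old behaviour call foo-1 explicitly
"$bin/foo"               # prints: foo 2.0
```

Debian's update-alternatives mechanizes exactly this symlink juggling for system-wide installs, including switching the default back if the new version misbehaves.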
user877329
(761 rep)
May 2, 2021, 09:55 AM
0
votes
1
answers
18
views
What are the existing open source tools to develop on-premise organizational app store on linux?
We have a Linux cluster in our organization, and my data science team is developing a number of ML projects to be utilized by teams across the organization. To enable the teams to access the ML models, the idea is to create apps and add them to an app store, so that any of the internal teams can register for an app and use it in their project.
Are there existing open-source tools to achieve this objective?
kosmos
(101 rep)
Apr 10, 2021, 08:53 AM
• Last activity: Apr 10, 2021, 12:20 PM
1
votes
1
answers
2725
views
How to install apt packages into mounted system image (img file)
I need to edit/prepare a Debian-based Raspbian system image for multiple Raspberry Pi devices.
Until now, my modifications consisted of adding or changing existing config files.
I wrote a script like this (to mount the partitions from the img file):
IMGFILE='edited-raspbian.img'
MNTDIR='/mnt/'$IMGFILE'/'
LOOPDEVICE=$(sudo losetup -f)
sudo losetup -P $LOOPDEVICE $IMGFILE
PARTITIONS=$(sudo fdisk -l $LOOPDEVICE | grep $LOOPDEVICE'*p' | cut -d$' ' -f 1 | cut -d$'/' -f 3)
while IFS= read -r PARTITION; do
MNTDIRPART=$MNTDIR'/'${PARTITION: -2}
sudo mkdir -p $MNTDIRPART
sudo mount "/dev/$PARTITION" "$MNTDIRPART"
done <<< "$PARTITIONS"
After I run it, I can see and edit the '/' and '/boot' partitions from the image in these directories:
/mnt/edited-raspbian.img/p1
/mnt/edited-raspbian.img/p2
---
My question is: how can I install apt packages "into the image"? Can I just chroot into the directory where the image's `/` partition is mounted and run `apt install`?
To simplify everything I can work on a Raspberry Pi with Raspbian (normally I edit these images on the latest Debian).
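To the chroot question: yes, that is the usual trick, with one caveat: on an x86 host the ARM binaries inside the image cannot run natively, so you also need a user-mode emulator (package `qemu-user-static`) registered through binfmt_misc. A sketch following the mount layout above:

```shell
# install_into_image ROOTDIR PKG...: chroot into the image's mounted root
# partition and install packages with apt. The qemu copy step is only
# needed when the host CPU architecture differs from the image's.
install_into_image() {
    root=$1; shift
    sudo cp /usr/bin/qemu-arm-static "$root/usr/bin/"
    sudo chroot "$root" apt-get update
    sudo chroot "$root" apt-get install -y "$@"
}
# usage: install_into_image /mnt/edited-raspbian.img/p2 vim
```

On a Raspberry Pi itself the qemu step is unnecessary; some packages additionally want /proc, /sys and /dev/pts bind-mounted inside the chroot.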
Kamil
(799 rep)
Jan 9, 2021, 04:54 PM
• Last activity: Jan 9, 2021, 07:11 PM
1
votes
0
answers
468
views
Debian packages for continuous deployment
I would like to use Debian packages to deploy software (a web application) to a Debian-based server. For reasons beyond the scope of this question we cannot use Docker (or a PaaS such as Heroku) to avoid this problem altogether.
The setup is pretty simple, we have a Git repository with some source code. We work on branches and run the code locally (no Debian-specific stuff here, in fact development is done on various & different operating systems); when we're happy we commit to a branch, wait for CI tests to run, review & merge to master.
When a merge to master happens, we want to deploy it to a Debian server. Currently this is done "manually" via a shell script that moves files into the right place and restarts the right services. This is fragile for many reasons (the script might fail in the middle and leave the production server in an inconsistent state) and I'd like something better.
Debian packages sound like a good solution. They have built-in handling for many of the things we currently do manually via the deployment script, such as copying files, (re)starting systemd services, etc. Furthermore our CI/CD system (Azure DevOps) has the concept of _artifacts_ which are stored and can be redeployed manually at any time, so it fits well with the idea of a `.deb` that contains the entire application.
The problems I see:
* Debian insists on the concept of releases & versions. In our case we don't have versions and don't want them; all we care about is being able to deploy whatever the current master branch is; the `.deb` file itself will be copied and `dpkg`'d through a shell script over SSH, so versioning for the purpose of maintaining an APT repository is irrelevant.
* Debian insists on changelogs. We don't want to maintain those manually; `gbp dch --ignore-branch -S` generates one for us, but it squashes all the commits into a single "change", and furthermore still doesn't solve the version problem.
* The majority of the tooling & documentation assumes I have some sort of versioned source tarball and the "Debian" part of it is its own repo. Not only do I not have a concept of versions but the application's source and the Debian packaging tooling for it is in the same repo.
Questions:
* Is this a good idea to begin with?
* How do I completely ignore/work around the concept of versions and changelogs?
* Can I still preserve the install vs upgrade distinction for packages? My application has different logic for whether we're upgrading a different installation or doing a new installation from scratch (for example, on a new installation the application will depend on a configuration file that needs to be created manually, so starting the systemd service automatically is a no-go, however during an upgrade we assume the configuration file is already there so if the service is already started we do want to restart it).
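One way to sidestep hand-maintained versions, sketched under the assumption that every package is built in CI from a git checkout: synthesize a monotonically increasing version from the commit count and generate a throwaway changelog entry from it (`myapp` is a placeholder package name):

```shell
# ci_version: derive a Debian-acceptable version string from git history;
# rev-list --count grows with every commit, so versions always increase.
ci_version() {
    printf '0.0.%s+g%s\n' \
        "$(git rev-list --count HEAD)" \
        "$(git rev-parse --short HEAD)"
}
# in CI: dch --create --package myapp --distribution unstable \
#            --newversion "$(ci_version)" "automated CI build"
```

As for install vs. upgrade: maintainer scripts already receive that distinction; `postinst` is invoked as `postinst configure <most-recently-configured-version>`, with that argument empty on a fresh install, so your logic can branch on it.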
André Borie
(575 rep)
Oct 3, 2020, 06:19 PM
• Last activity: Oct 3, 2020, 06:27 PM
0
votes
3
answers
4405
views
Run application built on Ubuntu on CentOS
Is there a way to deploy an application that you built on Ubuntu on CentOS? And if so, how?
EDIT: Sorry, I should have given some more info.
It's built on Ubuntu 15.10 with Qt 5.6, and besides Qt it has only one other dependency, which is statically linked. I'm not sure if its dependencies are statically linked as well, but it's using some boost libs and nothing Ubuntu-specific, so I'm guessing that shouldn't be a problem.
rndm
(11 rep)
Jul 23, 2016, 12:17 PM
• Last activity: Sep 21, 2020, 06:05 AM
3
votes
1
answers
1627
views
printf -v is an illegal option, in Bitbucket pipeline. And a question about <<
Hello, I'm brand new to shell scripting, so sorry if this is trivial.
How do you use the `printf` command with the `-v` option? In our `deployment.sh` file we have this line: `printf -v BITBUCKET_COMMIT_str %q "$BITBUCKET_COMMIT"`
echo 'Initializing new deployment'
printf -v BITBUCKET_COMMIT_str %q "$BITBUCKET_COMMIT"
echo "commit string $BITBUCKET_COMMIT_str"
When the Bitbucket pipeline runs, it always fails at this line with the error `printf -v is an illegal option`.
Error from Bitbucket
Initializing new deployment
commit string
deployment.sh: 28: printf: Illegal option -v
bash: -c: line 1: syntax error: unexpected end of file
npm ERR! code ELIFECYCLE
npm ERR! errno 1
npm ERR! XXXXX deploy: sh deployment.sh
npm ERR! Exit status 1
npm ERR!
npm ERR! Failed at the XXXX deploy script.
npm ERR! This is probably not a problem with npm. There is likely additional logging output above.
npm ERR! A complete log of this run can be found in:
npm ERR! /root/.npm/_logs/XXX-debug.log
I've tested the `printf -v` command in a local terminal and it works fine, so I don't know why it doesn't work in the Bitbucket build pipeline. I've tried replacing the `printf` with a plain variable assignment, but then I get a syntax error later on in the file. I think that's because we use the variable as input for another command (`bash -s "$BITBUCKET_COMMIT_str"`) fed through a `<<` here-document. With that change the code inside the HERE block seems to run, but from what I understand about `<<`, the change doesn't really make sense.
Any help would be greatly appreciated. Most of my team is on holiday so I'm having a tough time with this. Thanks in advance :)
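For what it's worth, the `deployment.sh: 28:` error prefix suggests the pipeline runs the script with `sh` (dash on Debian-based images), and `printf -v` is a bash-only extension. Two hedged fixes: invoke the script with `bash deployment.sh` instead of `sh deployment.sh`, or stay POSIX and capture printf's output rather than assigning with `-v`:

```shell
# POSIX-compatible stand-in for bash's `printf -v var ...`: capture output.
BITBUCKET_COMMIT='abc123'                        # stand-in for the real value
BITBUCKET_COMMIT_str=$(printf '%s' "$BITBUCKET_COMMIT")
echo "commit string $BITBUCKET_COMMIT_str"       # prints: commit string abc123
```

Caveat: the `%q` format is itself a bashism; under plain sh you could call the GNU coreutils binary (`/usr/bin/printf '%q' ...`) where available, or skip the quoting entirely if the value is a bare commit hash.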
adam.k
(141 rep)
Jul 7, 2020, 02:13 PM
• Last activity: Jul 8, 2020, 08:49 AM
1
votes
2
answers
153
views
Why use Docker images in production when already have VMs?
I am doing research on the most efficient way to serve a production app from a cloud hosting provider like DigitalOcean, using Linux as the server platform. DigitalOcean provides VMs, called droplets. Common sense suggests that the best way is to just deploy the app as-is in the provided VM (droplet). Docker containers surely bring an overhead that translates to slower responses from the deployed app. I am not aware of any optimizations done by the cloud hosting providers for Docker containers running inside VMs.
Considering that the VMs can be configured with scripts to be identical as the Docker containers, does it make any sense to use Docker in production?
vladimir.gorea
(143 rep)
Jun 28, 2020, 08:24 AM
• Last activity: Jun 28, 2020, 01:30 PM
12
votes
4
answers
6882
views
Linux Bulk/Remote Administration
Besides our internal IT infrastructure, we've got around 500 Linux machines hosting our services for the online world. They are grouped in a bunch of clusters like Database A-n, Product A-n, NFS, Backoffice and so on. Furthermore, they are administered by an external provider, according to our specifications and requirements.
However, we face a lot of trouble during (web-) software development, roll-out and deploy - especially because the dev- and staging-environments have almost nothing in common with the live systems (I spare out the nasty details..).
Thus, I've tried to create virtual machines, copied the various live systems as exactly as possible and prepared them to connect to e.g. the development databases instead of the "real" ones, transparently for the developers (they aren't `root`). This works pretty well, but...
I was wondering how one could administer those systems remotely and _in bulk_? Is there some software family I'm not aware of? Or, at least, some techniques or principles one should be familiar with?
We would provide every developer with a bunch of images to be run locally (VirtualBox). The QA dept. would get virtual clusters (XEN or Hyper-V). If I need to provide an additional server-module, re-route a new database connection or just want to update everything provided by the package manager... how could I possibly do that without being forced to log on to every system and/or ask my colleagues to download and run some fixture-script?
I believe there are plenty of solutions. Well, somehow I'm too stupid to enter the correct keywords into the search engines... Or isn't this issue as trivial as it sounds?
For the record:
- Almost all systems are running Debian GNU/Linux 6.x "squeeze"
- No developer is forced to use a particular OS at his/her workstation
- The budget is limited, of course, but not too small to buy proprietary software
- A solution that would involve our aforementioned provider is preferred
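The search keyword you're missing is "configuration management": Puppet, Chef, CFEngine, Salt and Ansible all target exactly this bulk-administration problem. As a taste, a hypothetical Ansible inventory grouping the clusters; Ansible is agentless, so the ~500 hosts only need SSH access:

```ini
# inventory.ini - hypothetical host groups mirroring the clusters
[database]
db[01:12].example.com

[product]
app[01:40].example.com

[nfs]
nfs[01:04].example.com
```

With that in place, one command touches a whole group, e.g. `ansible product -i inventory.ini -m apt -a 'update_cache=yes upgrade=dist' --become`, and playbooks capture the repeatable parts (re-routing a database connection, installing a server module) as code that the provider could also run.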
mjhennig
(263 rep)
Jun 9, 2012, 10:59 PM
• Last activity: Jun 15, 2020, 07:41 PM
1
votes
1
answers
69
views
What set of system-wide configuration settings can conflict when installing different applications on a unix-based OS?
Reading this report on containerization the authors mention that:
> A problem caused by Unix's shared global filesystem is the lack of configuration isolation. Multiple applications can have conflicting requirements for system-wide configuration settings.
I know that when installing different applications for development, they frequently require different versions of the same libraries, or, more vaguely, different system-wide configuration values. It is not clear to me what this set of settings is on a Unix-based OS.
What configuration values commonly cause this issue?
Naively, why is this not as much of an issue once the application is deployed or installed by the end user? Why is it that I cannot easily have development builds of similar applications running side by side, but I can download them with a package manager and have them run with little or no problem?
Anthony O
(111 rep)
Dec 21, 2019, 06:02 PM
• Last activity: Dec 22, 2019, 11:23 AM
1
votes
1
answers
139
views
Rolling upgrade/deployment for wine?
While using wine to run some Windows exe programs on Lubuntu 18.04, I ran an update and upgrade, which probably updated wine.
With those Windows exe programs still running, I tried to run another Windows exe program:
$ wine another.exe
wine client error:0: version mismatch 547/571.
Your wineserver binary was not upgraded correctly,
or you have an older one somewhere in your PATH.
Or maybe the wrong wineserver is still running?
I don't want to exit the running Windows exe programs. Does that mean I shouldn't kill the running wine processes?
What can I do to start the other Windows exe program?
Is this a common problem in deployment: rolling upgrade/deployment?
Thanks.
Tim
(106420 rep)
Dec 12, 2019, 10:35 PM
• Last activity: Dec 14, 2019, 09:45 AM
0
votes
2
answers
46
views
Deployment systems for Linux
My project is based on a Raspberry Pi distro, Raspbian, but I have made changes to it. There are a lot of them: changes in config.txt, adding system services, installing new packages, changing the splash screen image, etc. But every time I need a new, "fresh" system, I have to repeat all these changes. Are there automated tools to perform this? Also, I want to ship these changes to my potential users, to make deployment of my changes easy on their side.
artsin
(101 rep)
Oct 22, 2019, 09:40 AM
• Last activity: Oct 22, 2019, 11:24 AM
0
votes
2
answers
1385
views
Recommendations for editing same file on different machines
Our Rails app is scaled across multiple machines, and from time to time we need to change settings in `production.yml`. Right now we have to SSH into each server and do the editing on each machine individually.
What's the right way to handle this case?
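The usual answer is to keep one canonical copy of the file under version control and push it out with a configuration-management tool instead of editing in place. A hypothetical Ansible playbook sketch (host group, paths and service name are placeholders):

```yaml
# push-config.yml - copy the canonical production.yml to every app server
# and restart the app only where the file actually changed.
- hosts: rails_servers
  become: yes
  tasks:
    - name: Deploy production.yml
      copy:
        src: files/production.yml
        dest: /srv/app/config/production.yml
      notify: restart rails app
  handlers:
    - name: restart rails app
      service:
        name: rails-app
        state: restarted
```

Run with `ansible-playbook -i inventory push-config.yml`; lighter alternatives include pdsh/pssh for parallel SSH edits, or pulling the file from git on each host.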
simo
(125 rep)
Jan 29, 2019, 02:05 PM
• Last activity: Jan 29, 2019, 04:27 PM
0
votes
1
answers
397
views
What's the difference between CMs "push" method (Ansible) to "pull" method (Chef/Puppet)?
I know that some of the advantages of Ansible over many other CMs are these:
1. Ansible's scripts are written in YAML, a simple serialization language.
2. One doesn't have to install it on the machines you deploy its commands/playbooks to.
3. Ansible's strong user base and community (for example, galaxy roles).
I know there is another bold difference: Ansible uses the "push" method, while some other CMs use the "pull" method.
What is the difference here? Maybe it reflects difference 2?
user149572
Dec 26, 2018, 06:15 AM
• Last activity: Dec 26, 2018, 07:06 PM
Showing page 1 of 20 total questions