So, I’ve written some fairly big code for CentOS 7.1. The code essentially makes use of different command-line tools by parsing their text output and shoving it into a database... pretty straightforward.
Now I’m tasked with making this code run on Ubuntu, CentOS 6, and potentially other flavors. The code of concern is written in Python 2.7. What methodologies exist for getting this done?
The method staring me in the face is obvious: just switch the commands you're running, and how you parse their output, based on what OS you're running on. But I'm worried this isn't a good approach and may become unmanageable. I've never had to get code running on multiple OSs before, and I'm hoping someone with experience can help me with the overall approach to what must be a fairly common programming challenge.
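Something like this is what I have in mind (a rough Python 2.7 sketch; the commands and action names below are just placeholders, not my actual tool set):

    import platform
    import subprocess

    # One lookup table keyed by distro family; every supported action maps
    # to the command whose output gets parsed on that distro.
    COMMANDS = {
        'centos': {'list_packages': ['rpm', '-qa']},
        'ubuntu': {'list_packages': ['dpkg-query', '-W', '-f', '${Package}\n']},
    }

    def distro_family():
        # platform.linux_distribution() exists in Python 2.7
        # (it was removed from the stdlib in Python 3.8).
        name = platform.linux_distribution()[0].lower()
        for family in COMMANDS:
            if family in name:
                return family
        raise RuntimeError('Unsupported distro: %r' % name)

    def list_packages():
        out = subprocess.check_output(COMMANDS[distro_family()]['list_packages'])
        return out.splitlines()

The worry is exactly that this table, and the matching parsers, grow without bound as distros are added.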
- So most of the work in the Python program is not done in Python, but via Popen calls (or other things from the subprocess module)? – Frames Catherine White, May 15, 2016 at 2:06
- Yeah. We use tons of packages. A lot are hardware-dependent because of what we are doing. I'm probably running 100 complex commands just to give the user positive feedback that the GUI did what it was supposed to in the back-end. To give you a better idea, this is what we made for CentOS 7... now we gotta get it to run on many Linux flavors: bithoarder.com – gunslingor, May 15, 2016 at 2:41
- How good is your testing? If I run your stuff on the wrong OS, will something tell me the subprocess call didn't do what was expected? Or is the exit status the most you check? – candied_orange, May 15, 2016 at 4:03
- It provides error messages, yes, which would be displayed to the user. I'm in the process of modifying the install script to not allow installation on an unapproved OS. Hoping I won't have to get too granular with version control, you know, 6.01938. – gunslingor, May 15, 2016 at 5:09
- Coming back to this 8 years later... most tools seem to change across distros. Whether it's the command, its output, its options, or its name, they change. I think ifconfig is ipconfig in some cases, and some Linux distros organize networking files differently. Things change constantly when you try to build a software appliance around CLI tools. In hindsight, a better approach is not to use the CLI tools but to interface at a lower level... though even that changes. The only real solution is OS adapter code, as far as I can tell. – gunslingor, Oct 3, 2024 at 17:52
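The "OS adapter code" that last comment mentions could be structured roughly like this: one class per distro behind a common interface, selected once at startup. A minimal Python 2.7 sketch; the class names, commands, and parsing below are illustrative only, not taken from the actual project:

    import platform
    import subprocess

    class BaseAdapter(object):
        """Common interface; the rest of the code never branches on the OS."""
        def interfaces(self):
            raise NotImplementedError

    class Centos7Adapter(BaseAdapter):
        def interfaces(self):
            # 'ip -o link show' prints one interface per line: "1: lo: <...> ..."
            out = subprocess.check_output(['ip', '-o', 'link', 'show'])
            return [line.split(':')[1].strip() for line in out.splitlines() if line]

    class Centos6Adapter(BaseAdapter):
        def interfaces(self):
            # 'ifconfig -a' starts each interface block at column zero.
            out = subprocess.check_output(['ifconfig', '-a'])
            return [line.split()[0] for line in out.splitlines()
                    if line and not line[0].isspace()]

    def make_adapter():
        name, version, _ = platform.linux_distribution()
        if 'centos' in name.lower():
            return Centos7Adapter() if version.startswith('7') else Centos6Adapter()
        raise RuntimeError('No adapter for %s %s' % (name, version))

Each new distro then means one new adapter class rather than scattered if/else branches.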
2 Answers
You can use the platform library to run different commands based on which OS flavor and/or version it's running under.
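For example, on Python 2.7 (a quick sketch; note that linux_distribution() was later removed in Python 3.8, where the third-party "distro" package is the usual replacement):

    import platform

    # A few of the values the platform module exposes:
    print platform.system()               # e.g. 'Linux'
    print platform.release()              # kernel release, e.g. '3.10.0-229.el7.x86_64'
    print platform.machine()              # e.g. 'x86_64'
    print platform.linux_distribution()   # e.g. ('CentOS Linux', '7.1.1503', 'Core')

You can then key your command tables off the distribution name and major version returned there.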
I've had a similar, but simpler, situation with a C program running on different Linux distros and using maybe half a dozen command-line tools. What I did was write a C function char *whichpath(char *command), and a similar locatepath(). The which version just does popen("which command","r") and reads the pipe, simply letting which find the (path to the) command I need. If that fails, I try locate, and if that fails too, the program can't run and emits an error.

Of course, which (and occasionally locate) have to be available on the distro, but they usually are. And if which finds the command, it's already on your PATH, so there's no problem in the first place. Otherwise, you may have to disambiguate multiple lines returned from locate, and then execute the full located path. And, of course, the name of the command can't change between distros, but whether it's in /usr/bin/ or elsewhere won't matter. In fact, even the same executable image (compiled with -static libs) can run on different boxes running different distros.
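A rough Python 2.7 translation of the same which-then-locate idea (a sketch, not the actual C code from this answer) might look like:

    import subprocess

    def find_command(name):
        """Try `which` first; fall back to `locate`; raise if neither finds it."""
        try:
            # Succeeds only if the command is already on PATH.
            return subprocess.check_output(['which', name]).strip()
        except (subprocess.CalledProcessError, OSError):
            pass
        try:
            hits = subprocess.check_output(['locate', name]).splitlines()
        except (subprocess.CalledProcessError, OSError):
            hits = []
        # Keep only paths whose final component is exactly `name`; the caller
        # may still need to disambiguate if several remain.
        hits = [h for h in hits if h.endswith('/' + name)]
        if hits:
            return hits[0]
        raise RuntimeError('%s not found via which or locate' % name)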