Module/Package resolution in Python

So I have a project directory "dataplatform", and its contents are as follows:

dataplatform
├── __init__.py
├── commons
│   ├── __init__.py
│   ├── __pycache__
│   │   ├── __init__.cpython-38.pyc
│   │   ├── get_partitions.cpython-38.pyc
│   │   └── packages.cpython-38.pyc
│   ├── get_partitions.py
│   ├── packages.py
│   ├── pipeline
│   │   ├── __init__.py
│   │   └── pipeline.py
│   └── spark_logging.py
├── pipelines
│   ├── __init__.py
│   └── batch
│       ├── ETL.py
│       ├── ReadMe.rst
│       └── main.py
└── requirement.txt

I have two questions here:

  1. In the pipelines package, I try to import modules from the commons package inside the main.py module with from dataplatform.commons import *. However, the IDE (PyCharm) immediately flags this as an error, saying it cannot find the package dataplatform. Yet dataplatform contains an __init__.py and is therefore a package that has commons as a sub-package. What could be going wrong there? When I replace the above import statement with from commons import *, it works just fine. (See the sketch after question 2.)

  2. Now, about the project working directory: when I execute the main.py script from the dataplatform directory by passing the complete path of the main.py file to the python3 executable, it refuses to run, throwing the same import error:

    File "pipelines/batch/main.py", line 2, in from dataplatform.commons import * ModuleNotFoundError: No module named 'dataplatform'

I would like to know what the root directory (working directory) should be, so that executing main.py from there (on my local machine) succeeds.
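For reference, here is my current understanding of the search-path behaviour, as a minimal sketch (the paths in the comments are illustrative, and the runpy call is only the programmatic equivalent of python3 -m):

    # When Python runs a script by path, sys.path[0] is set to the script's
    # OWN directory (pipelines/batch), not to the directory that contains
    # the dataplatform package -- so the absolute import cannot resolve.
    import importlib.util
    import runpy
    import sys

    print(sys.path[0])                               # e.g. .../dataplatform/pipelines/batch
    print(importlib.util.find_spec("dataplatform"))  # None -> ModuleNotFoundError on import

    # Running from the PARENT of dataplatform instead, with -m, puts the
    # current directory on sys.path, so the dataplatform package resolves:
    #
    #   cd /path/to/parent_of_dataplatform        # hypothetical path
    #   python3 -m dataplatform.pipelines.batch.main
    #
    # Programmatic equivalent; guarded so it only runs when importable:
    if importlib.util.find_spec("dataplatform") is not None:
        runpy.run_module("dataplatform.pipelines.batch.main", run_name="__main__")

In other words, what seems to matter is which directory ends up on sys.path, not what the working directory happens to be called.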

I am keen on keeping the dataplatform package name prefixed to every sub-package I use in the code, because the environment I run this on is a Hadoop Sandbox (HDP 3.1), and for reasons unknown to me, the dataplatform prefix is required to load the files from HDFS successfully (the code is zipped and stored on HDFS; a call to main.py somehow executes the whole program correctly).
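My working assumption about why the zipped code runs on the sandbox without any visible unzipping (an assumption, not something I have verified): Python's zipimport machinery can import packages directly out of a zip archive once the archive itself is on sys.path, and Spark's --py-files option effectively arranges that on each executor. A sketch of the mechanism only, not a proposed fix (see the note below); the archive path is hypothetical:

    import sys

    # zipimport: if dataplatform_code.zip contains dataplatform/__init__.py
    # at its top level, putting the archive itself on sys.path makes the
    # package importable with no extraction step.
    sys.path.insert(0, "/tmp/dataplatform_code.zip")  # hypothetical archive

    # import dataplatform.commons   # would be resolved from inside the zip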

Note: Using sys.path.append is not an option.

  • Ok. Now I do this in VS Code (which doesn't add an .idea folder). To get it running, I need to call sys.path.append(path_until_dataplatform). However, on HDFS, sys.path makes no sense, and there I am able to get this working without appending any path. I think the equivalent of the path append somehow happens when a YARN container is launched, which is why the file is found. However, coming back to the original point: 1) Why isn't Python allowing me to prefix the package name to the name of a sub-package at import time, just because the former is also the name of my current working directory? Commented Feb 26, 2023 at 14:58
  • 2) It would be great if someone could explain how the contents of a zipped folder on HDFS are read by the Spark program, as I do not see any sort of unzipping happening in the YARN AppMaster logs. Commented Feb 26, 2023 at 15:07
  • You have to set up your environment (whether PyCharm, VS Code, the command line, ...) the same way HDFS does it. I have no idea how HDFS works, but you do not have to modify sys.path in your Python code; this can be done in your IDE's project configuration (see the sketch below). Commented Feb 26, 2023 at 16:16
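A minimal sketch of what that last comment suggests, assuming the aim is to seed the search path outside the code (the directory is a placeholder):

    import os

    # PYTHONPATH entries are added to sys.path ahead of the standard
    # locations, without touching the source code. Set it in the shell
    # or in the IDE's run configuration, e.g.:
    #
    #   PYTHONPATH=/path/to/parent_of_dataplatform python3 pipelines/batch/main.py
    #
    print(os.environ.get("PYTHONPATH"))  # None if unset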
