
So I have a project directory "dataplatform" and its contents are as follows:

 dataplatform
 ├── __init__.py
 ├── commons
 │  ├── __init__.py
 │  ├── __pycache__
 │  │  ├── __init__.cpython-38.pyc
 │  │  ├── get_partitions.cpython-38.pyc
 │  │  └── packages.cpython-38.pyc
 │  ├── get_partitions.py
 │  ├── packages.py
 │  ├── pipeline
 │  │  ├── __init__.py
 │  │  └── pipeline.py
 │  └── spark_logging.py
 ├── pipelines
 │  ├── __init__.py
 │  └── batch
 │     ├── ETL.py
 │     ├── ReadMe.rst
 │     └── main.py
 └── requirement.txt

I have two questions here:

  1. In the pipelines package, I try to import modules from the commons package in the main.py module with from dataplatform.commons import *. However, the IDE (PyCharm) immediately flags this as an error, saying it cannot find the package dataplatform. Yet dataplatform has an __init__.py and is therefore a package that has commons as a sub-package. What could be going wrong there? When I replace the above import statement with from commons import *, it works just fine.

  2. Now, about the project working directory: when I execute the main.py script from the dataplatform directory by passing the complete path of main.py to the python3 executable, it refuses to run, and the same import error is thrown:

    File "pipelines/batch/main.py", line 2, in from dataplatform.commons import * ModuleNotFoundError: No module named 'dataplatform'

I would like to know what the root directory (working directory) should be from which I execute the main file (on my local machine), so that main.py runs successfully.

I am keen on keeping the dataplatform package name prefixed to every sub-package I import in the code, because the environment I run this on is a Hadoop Sandbox (HDP 3.1), and for reasons unknown to me the dataplatform prefix is required for the files to load successfully from HDFS (the code is zipped and stored on HDFS; a call to main.py somehow executes the whole program correctly there).

Note: Using sys.path.append is not an option.


1 Answer


Do I understand you correctly that you need from dataplatform.commons import * in main.py for it to work in the Hadoop Sandbox? You could set up a PyCharm project one level above your dataplatform folder; see my example project structure below. The hidden .idea folder contains the PyCharm project settings.

├── dataplatform
│  ├── commons
│  │  ├── get_partitions.py
│  │  ├── __init__.py
│  │  ├── packages.py
│  │  ├── pipeline
│  │  │  ├── __init__.py
│  │  │  └── pipeline.py
│  │  └── spark_logging.py
│  ├── __init__.py
│  ├── pipelines
│  │  ├── batch
│  │  │  ├── ETL.py
│  │  │  ├── main.py
│  │  │  └── ReadMe.rst
│  │  └── __init__.py
│  └── requirement.txt
└── .idea
   ├── .gitignore
   ├── inspectionProfiles
   │  └── profiles_settings.xml
   ├── misc.xml
   ├── modules.xml
   ├── stackoverflow.iml
   └── workspace.xml

Now you can use an import like from dataplatform.commons import * in main.py, because PyCharm appends the project folder to sys.path when it runs your code.
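The same works outside PyCharm whenever the interpreter can see the directory that contains dataplatform. A minimal sketch of the command-line equivalent, assuming the project lives under /home/me/project (a hypothetical path):

    # run main.py as a module from the directory that contains dataplatform/
    cd /home/me/project
    python3 -m dataplatform.pipelines.batch.main

    # or point PYTHONPATH at that directory and run the file directly
    PYTHONPATH=/home/me/project python3 dataplatform/pipelines/batch/main.py

Either way, the folder one level above dataplatform ends up on sys.path, so from dataplatform.commons import * resolves.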

Alternatively, you can have the PyCharm project directory somewhere else and add the path to the folder that contains dataplatform: File > Settings... > Project: PROJECTNAME > Project Structure; on the right side you can add that folder as a content root.


3 Comments

Ok. Now I do this in VS Code (which doesn't add an .idea folder). To get it running, I need to call sys.path.append(path_until_dataplatform). However, on HDFS sys.path makes no sense, and there I can get this working without appending any path. I think the equivalent of the path append somehow happens when a YARN container is launched, which is why the file is found. Coming back to the original points: 1) Why isn't Python allowing me to prefix the package name to the name of a sub-package at import time, just because the former is also the name of my current working directory?
2) It would be great if someone could explain how the contents of a zipped folder on HDFS are read by the Spark program, as I do not see any sort of unzipping happening in the YARN ApplicationMaster logs.
You have to set up your environment (whether PyCharm, VS Code, the command line, ...) the same way HDFS does it. I have no idea how HDFS works, but you do not have to modify sys.path in your Python code; this can be done in your IDE project configuration.
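For what it's worth, here is a sketch of how this usually fits together on the Spark side, assuming the zip is built with dataplatform/ at its top level and shipped via --py-files (an assumption about your setup): spark-submit puts the zip on the Python path of the driver and executors, and Python's zipimport loads modules straight out of the archive, which would explain why no unzipping shows up in the YARN logs and why the dataplatform. prefix resolves there.

    # hypothetical paths; assumes dataplatform/ sits at the root of the zip
    cd /home/me/project
    zip -r dataplatform.zip dataplatform
    spark-submit \
      --master yarn \
      --py-files dataplatform.zip \
      dataplatform/pipelines/batch/main.py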
