Commit 9cd87ed

Merge pull request avinashkranjan#1753 from Abhinavcode13/master: ADDED travel_destinations_scraper.py
2 parents 19ec3d1 + 6ce17d9

File tree

2 files changed: +60 −0 lines changed
Lines changed: 18 additions & 0 deletions

# Travel destination scraper

This script is a travel destination scraper. It extracts information about travel destinations from a specified webpage using web-scraping techniques.

## Instructions

- Make sure you have Python installed on your system. You can download and install Python from the official Python website (https://www.python.org).
- Install the third-party libraries the script imports: `pip install requests beautifulsoup4`.
- Open a text editor and copy the script into a new file.
- Save the file with a .py extension, such as travel_destinations_scraper.py or destination_scraper_script.py.
- Open a terminal or command prompt and navigate to the directory where you saved the Python script.
- Run the script by typing `python travel_destinations_scraper.py` (replace travel_destinations_scraper.py with the actual name of your script if you used a different name).
- The script will scrape the travel destinations from the specified webpage and print the destination information to the console.
Lines changed: 42 additions & 0 deletions

```python
import requests
from bs4 import BeautifulSoup


def scrape_travel_destinations():
    url = "https://www.example.com/destinations"  # Replace with the actual URL of the travel destinations page
    response = requests.get(url, timeout=10)
    response.raise_for_status()  # Fail early on HTTP errors instead of parsing an error page
    soup = BeautifulSoup(response.text, "html.parser")

    destinations = []

    # Find the HTML elements containing the destination information
    destination_elements = soup.find_all("div", class_="destination")

    for element in destination_elements:
        # Extract the desired information from each destination element,
        # skipping entries that are missing any expected sub-element
        name_tag = element.find("h2")
        description_tag = element.find("p")
        image_tag = element.find("img")
        if not (name_tag and description_tag and image_tag):
            continue

        # Create a dictionary for each destination and append it to the list
        destinations.append({
            "name": name_tag.text.strip(),
            "description": description_tag.text.strip(),
            "image_url": image_tag["src"],
        })

    return destinations


if __name__ == "__main__":
    # Scrape the travel destinations and print them
    for destination in scrape_travel_destinations():
        print("Destination: ", destination["name"])
        print("Description: ", destination["description"])
        print("Image URL: ", destination["image_url"])
        print("------------------------")
```
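Because the placeholder URL will not serve real destination markup, the extraction logic can be checked offline by feeding BeautifulSoup a small HTML snippet that matches the selectors the script expects (a `div.destination` containing `h2`, `p`, and `img`). The snippet below is a hypothetical sketch with made-up sample data, not part of the original script:

```python
from bs4 import BeautifulSoup

# Minimal sample HTML matching the selectors used by the scraper
sample_html = """
<div class="destination">
  <h2>Kyoto</h2>
  <p>Temples, gardens, and traditional streets.</p>
  <img src="https://example.com/kyoto.jpg">
</div>
"""

soup = BeautifulSoup(sample_html, "html.parser")
element = soup.find("div", class_="destination")

# Same extraction steps as in scrape_travel_destinations()
destination = {
    "name": element.find("h2").text.strip(),
    "description": element.find("p").text.strip(),
    "image_url": element.find("img")["src"],
}
print(destination["name"])       # Kyoto
print(destination["image_url"])  # https://example.com/kyoto.jpg
```

Testing the selectors against a known snippet like this makes it easy to adapt the `find`/`find_all` calls before pointing the scraper at a live page.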
