Commit 2c65e5d

Test.
1 parent e61e5d0 commit 2c65e5d

File tree

docs/README.md

1 file changed: +0 −1 lines changed


docs/README.md

Lines changed: 0 additions & 1 deletion
@@ -47,7 +47,6 @@ That being said, parsing *wikitext* does come with its own challenges:
 
 - **Differences from the live web page**: The *wikitext* content you parse is not always identical to what you see on the live web page.
 
-
 ## The Story Behind This Tutorial
 
 Last year, I set up a system of flashcards for learning German, which involved scraping German Wiktionary pages to retrieve word inflections using BeautifulSoup. My code worked well initially, but by the time I wanted to use it again, it was broken because the website layout had changed. So, I fixed the code, but when the layout changed again, I realized that web scraping was not the best approach for the task at hand.

0 commit comments

