
Commit b1479a1

Author: Umer Farooq
Add files via upload
1 parent 8400e4c commit b1479a1

File tree

2 files changed: +973 additions, -0 deletions

Assignment+1.ipynb

Lines changed: 280 additions & 0 deletions
@@ -0,0 +1,280 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"---\n",
"\n",
"_You are currently looking at **version 1.0** of this notebook. To download notebooks and datafiles, as well as get help on Jupyter notebooks in the Coursera platform, visit the [Jupyter Notebook FAQ](https://www.coursera.org/learn/python-text-mining/resources/d9pwm) course resource._\n",
"\n",
"---"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Assignment 1\n",
"\n",
"In this assignment, you'll be working with messy medical data and using regex to extract relevant information from the data.\n",
"\n",
"Each line of the `dates.txt` file corresponds to a medical note. Each note has a date that needs to be extracted, but each date is encoded in one of many formats.\n",
"\n",
"The goal of this assignment is to correctly identify all of the different date variants encoded in this dataset and to properly normalize and sort the dates.\n",
"\n",
"Here is a list of some of the variants you might encounter in this dataset:\n",
"* 04/20/2009; 04/20/09; 4/20/09; 4/3/09\n",
"* Mar-20-2009; Mar 20, 2009; March 20, 2009; Mar. 20, 2009; Mar 20 2009;\n",
"* 20 Mar 2009; 20 March 2009; 20 Mar. 2009; 20 March, 2009\n",
"* Mar 20th, 2009; Mar 21st, 2009; Mar 22nd, 2009\n",
"* Feb 2009; Sep 2009; Oct 2010\n",
"* 6/2008; 12/2009\n",
"* 2009; 2010\n",
"\n",
"Once you have extracted these date patterns from the text, the next step is to sort them in ascending chronological order according to the following rules:\n",
"* Assume all dates in xx/xx/xx format are mm/dd/yy\n",
"* Assume all dates where the year is encoded in only two digits are from the 1900s (e.g. 1/5/89 is January 5th, 1989)\n",
"* If the day is missing (e.g. 9/2009), assume it is the first day of the month (e.g. September 1, 2009).\n",
"* If the month is missing (e.g. 2010), assume it is the first of January of that year (e.g. January 1, 2010).\n",
"\n",
"With these rules in mind, find the correct date in each note and return a pandas Series of the original Series' indices in chronological order.\n",
"\n",
"For example, if the original series were this:\n",
"\n",
"    0 1999\n",
"    1 2010\n",
"    2 1978\n",
"    3 2015\n",
"    4 1985\n",
"\n",
"Your function should return this:\n",
"\n",
"    0 2\n",
"    1 4\n",
"    2 0\n",
"    3 1\n",
"    4 3\n",
"\n",
"Your score will be calculated using [Kendall's tau](https://en.wikipedia.org/wiki/Kendall_rank_correlation_coefficient), a correlation measure for ordinal data.\n",
"\n",
"*This function should return a Series of length 500 and dtype int.*"
]
},
{
"cell_type": "code",
"execution_count": 1,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"0 03/25/93 Total time of visit (in minutes):\\n\n",
"1 6/18/85 Primary Care Doctor:\\n\n",
"2 sshe plans to move as of 7/8/71 In-Home Servic...\n",
"3 7 on 9/27/75 Audit C Score Current:\\n\n",
"4 2/6/96 sleep studyPain Treatment Pain Level (N...\n",
"5 .Per 7/06/79 Movement D/O note:\\n\n",
"6 4, 5/18/78 Patient's thoughts about current su...\n",
"7 10/24/89 CPT Code: 90801 - Psychiatric Diagnos...\n",
"8 3/7/86 SOS-10 Total Score:\\n\n",
"9 (4/10/71)Score-1Audit C Score Current:\\n\n",
"dtype: object"
]
},
"execution_count": 1,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"import pandas as pd\n",
"\n",
"doc = []\n",
"with open('dates.txt') as file:\n",
"    for line in file:\n",
"        doc.append(line)\n",
"\n",
"df = pd.Series(doc)\n",
"df.head(10)"
]
},
{
"cell_type": "code",
"execution_count": 2,
"metadata": {},
"outputs": [],
"source": [
"def date_sorter():\n",
"    \n",
"    # Your code here\n",
"    # Full date\n",
"    global df\n",
"    dates_extracted = df.str.extractall(r'(?P<origin>(?P<month>\\d?\\d)[/|-](?P<day>\\d?\\d)[/|-](?P<year>\\d{4}))')\n",
"    index_left = ~df.index.isin([x[0] for x in dates_extracted.index])\n",
"    dates_extracted = dates_extracted.append(df[index_left].str.extractall(r'(?P<origin>(?P<month>\\d?\\d)[/|-](?P<day>([0-2]?[0-9])|([3][01]))[/|-](?P<year>\\d{2}))'))\n",
"    index_left = ~df.index.isin([x[0] for x in dates_extracted.index])\n",
"    del dates_extracted[3]\n",
"    del dates_extracted[4]\n",
"    dates_extracted = dates_extracted.append(df[index_left].str.extractall(r'(?P<origin>(?P<day>\\d?\\d) ?(?P<month>[a-zA-Z]{3,})\\.?,? (?P<year>\\d{4}))'))\n",
"    index_left = ~df.index.isin([x[0] for x in dates_extracted.index])\n",
"    dates_extracted = dates_extracted.append(df[index_left].str.extractall(r'(?P<origin>(?P<month>[a-zA-Z]{3,})\\.?-? ?(?P<day>\\d\\d?)(th|nd|st)?,?-? ?(?P<year>\\d{4}))'))\n",
"    del dates_extracted[3]\n",
"    index_left = ~df.index.isin([x[0] for x in dates_extracted.index])\n",
"\n",
"    # Without day\n",
"    dates_without_day = df[index_left].str.extractall(r'(?P<origin>(?P<month>[A-Z][a-z]{2,}),?\\.? (?P<year>\\d{4}))')\n",
"    dates_without_day = dates_without_day.append(df[index_left].str.extractall(r'(?P<origin>(?P<month>\\d\\d?)/(?P<year>\\d{4}))'))\n",
"    dates_without_day['day'] = 1\n",
"    dates_extracted = dates_extracted.append(dates_without_day)\n",
"    index_left = ~df.index.isin([x[0] for x in dates_extracted.index])\n",
"\n",
"    # Only year\n",
"    dates_only_year = df[index_left].str.extractall(r'(?P<origin>(?P<year>\\d{4}))')\n",
"    dates_only_year['day'] = 1\n",
"    dates_only_year['month'] = 1\n",
"    dates_extracted = dates_extracted.append(dates_only_year)\n",
"    index_left = ~df.index.isin([x[0] for x in dates_extracted.index])\n",
"\n",
"    # Year\n",
"    dates_extracted['year'] = dates_extracted['year'].apply(lambda x: '19' + x if len(x) == 2 else x)\n",
"    dates_extracted['year'] = dates_extracted['year'].apply(lambda x: str(x))\n",
"\n",
"    # Month\n",
"    dates_extracted['month'] = dates_extracted['month'].apply(lambda x: x[1:] if type(x) is str and x.startswith('0') else x)\n",
"    month_dict = dict({'September': 9, 'Mar': 3, 'November': 11, 'Jul': 7, 'January': 1, 'December': 12,\n",
"                       'Feb': 2, 'May': 5, 'Aug': 8, 'Jun': 6, 'Sep': 9, 'Oct': 10, 'June': 6, 'March': 3,\n",
"                       'February': 2, 'Dec': 12, 'Apr': 4, 'Jan': 1, 'Janaury': 1, 'August': 8, 'October': 10,\n",
"                       'July': 7, 'Since': 1, 'Nov': 11, 'April': 4, 'Decemeber': 12, 'Age': 8})\n",
"    dates_extracted.replace({\"month\": month_dict}, inplace=True)\n",
"    dates_extracted['month'] = dates_extracted['month'].apply(lambda x: str(x))\n",
"\n",
"    # Day\n",
"    dates_extracted['day'] = dates_extracted['day'].apply(lambda x: str(x))\n",
"\n",
"    # Cleaned date\n",
"    dates_extracted['date'] = dates_extracted['month'] + '/' + dates_extracted['day'] + '/' + dates_extracted['year']\n",
"    dates_extracted['date'] = pd.to_datetime(dates_extracted['date'])\n",
"\n",
"    dates_extracted.sort_values(by='date', inplace=True)\n",
"    df1 = pd.Series(list(dates_extracted.index.labels[0]))\n",
"    \n",
"    return df1  # Your answer here"
]
},
{
"cell_type": "code",
"execution_count": 3,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"0 9\n",
"1 84\n",
"2 2\n",
"3 53\n",
"4 28\n",
"5 474\n",
"6 153\n",
"7 13\n",
"8 129\n",
"9 98\n",
"10 111\n",
"11 225\n",
"12 31\n",
"13 171\n",
"14 191\n",
"15 486\n",
"16 335\n",
"17 415\n",
"18 36\n",
"19 405\n",
"20 323\n",
"21 422\n",
"22 375\n",
"23 380\n",
"24 345\n",
"25 57\n",
"26 481\n",
"27 436\n",
"28 104\n",
"29 299\n",
" ... \n",
"470 220\n",
"471 243\n",
"472 208\n",
"473 139\n",
"474 320\n",
"475 383\n",
"476 286\n",
"477 244\n",
"478 480\n",
"479 431\n",
"480 279\n",
"481 198\n",
"482 381\n",
"483 463\n",
"484 366\n",
"485 439\n",
"486 255\n",
"487 401\n",
"488 475\n",
"489 257\n",
"490 152\n",
"491 235\n",
"492 464\n",
"493 253\n",
"494 231\n",
"495 427\n",
"496 141\n",
"497 186\n",
"498 161\n",
"499 413\n",
"Length: 500, dtype: int64\n"
]
}
],
"source": [
"#print(date_sorter())"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": true
},
"outputs": [],
"source": []
}
],
"metadata": {
"coursera": {
"course_slug": "python-text-mining",
"graded_item_id": "LvcWI",
"launcher_item_id": "krne9",
"part_id": "Mkp1I"
},
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.6.2"
}
},
"nbformat": 4,
"nbformat_minor": 2
}
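
For readers skimming the committed notebook, here is a compact, self-contained sketch of the approach the assignment markdown describes: try a few regexes from most to least specific, fall back to the first-of-month and first-of-January defaults, normalize with pd.to_datetime, and return the original indices in chronological order. This is only an illustration of the technique, not the graded solution above; the sample notes and the parse_note helper are invented for this example.

import re
import pandas as pd

# Invented sample notes standing in for lines of dates.txt.
notes = pd.Series([
    "Total time of visit 03/25/93",
    "Seen in clinic Mar 20th, 2009",
    "Follow-up scheduled 6/2008",
    "Diagnosis confirmed 1999",
])

# Patterns tried in order, from most to least specific.
patterns = [
    r'(?P<month>\d{1,2})[/-](?P<day>\d{1,2})[/-](?P<year>\d{2,4})',                   # 03/25/93, 4/20/2009
    r'(?P<month>[A-Z][a-z]+)\.? (?P<day>\d{1,2})(?:st|nd|rd|th)?,? (?P<year>\d{4})',  # Mar 20th, 2009
    r'(?P<month>\d{1,2})/(?P<year>\d{4})',                                            # 6/2008 -> day 1
    r'(?P<year>\d{4})',                                                               # 1999 -> January 1
]

def parse_note(text):
    """Return the first date found in a note, applying the assignment's defaults."""
    for pattern in patterns:
        match = re.search(pattern, text)
        if match is None:
            continue
        parts = match.groupdict()
        month, day, year = parts.get('month', '1'), parts.get('day', '1'), parts['year']
        if len(year) == 2:                      # two-digit years belong to the 1900s
            year = '19' + year
        if month.isdigit():
            return pd.to_datetime(f"{month}/{day}/{year}")
        return pd.to_datetime(f"{month} {day} {year}")   # named months, e.g. "Mar 20 2009"
    return pd.NaT

dates = notes.apply(parse_note)
order = pd.Series(dates.sort_values().index)    # original indices in chronological order
print(order)                                    # prints 0, 3, 2, 1 for this sample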

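A portability caveat on the committed date_sorter: it relies on DataFrame.append and MultiIndex.labels, which existed in the pandas releases contemporary with the pinned Python 3.6.2 kernel but have since been removed (DataFrame.append is gone as of pandas 2.0, and MultiIndex.labels was superseded by MultiIndex.codes). Below is a minimal sketch of the modern equivalents, using a toy two-note Series rather than dates.txt.

import pandas as pd

# Toy stand-ins for the frames date_sorter() builds; like the real ones, they
# come from Series.str.extractall and therefore carry a (row, match) MultiIndex.
notes = pd.Series(["visit on 03/25/93", "admitted in 1999"])
slash_dates = notes.str.extractall(r'(?P<month>\d{1,2})/(?P<day>\d{1,2})/(?P<year>\d{2})')
year_only = notes.str.extractall(r'(?P<year>\d{4})')

# pandas >= 2.0: DataFrame.append is removed, so concatenate instead of
#   dates_extracted = dates_extracted.append(other_matches)
dates_extracted = pd.concat([slash_dates, year_only])

# MultiIndex.labels is gone; level 0 of the extractall index holds the original
# row position, so get_level_values(0) recovers the same integers the notebook
# pulled out of .index.labels[0] (here the positions coincide with the codes).
df1 = pd.Series(list(dates_extracted.index.get_level_values(0)))
print(df1)   # 0, 1 for this toy example
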
0 commit comments
