I am developing a reporting application in Flask where I pull data from a PostgreSQL database in a route function and return it as JSON. On the client side I have some JavaScript that does a $.getJSON on the route to request the data.
Here is what the code looks like.
Flask route
import json

from flask import g
from psycopg2.extras import RealDictCursor

def get_db():
    # Lazily create one connection per application context and cache it on g.
    if not hasattr(g, 'postgres_db'):
        g.postgres_db = connect_db()
    return g.postgres_db

@main.route('/Q5')
@login_required
def failed_access():
    failed_query = """select * from prototype.Q5"""
    db = get_db()
    with db.cursor(cursor_factory=RealDictCursor) as cur:
        cur.execute(failed_query)
        results = cur.fetchall()
    data = json.dumps(results, indent=2, default=date_handler)
    return data
Client-side JavaScript looks like this:
$.getJSON('/Q5', function genChart() { blah(); });
Everything works fine, but I want to make sure there isn't something I am missing or could be doing in a better way.
3 Answers
That is not a lot of JavaScript to review ;)
You are not dealing with failure: $.getJSON() returns a jqXHR object, and you can/should define failure handling there, for example by chaining .fail() onto it. Similarly for the backend, it seems you are not handling any failures. That is okay for a prototype, but not for production. (Perhaps you take care of exception handling elsewhere in Flask; it is not obvious from the documentation.)
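A minimal sketch of what backend failure handling could look like, assuming psycopg2 and reusing the get_db and date_handler helpers from your question (the error payload, log message and status code are just illustrative):

import json
import logging

import psycopg2
from psycopg2.extras import RealDictCursor

@main.route('/Q5')
@login_required
def failed_access():
    failed_query = """select * from prototype.Q5"""
    db = get_db()
    try:
        with db.cursor(cursor_factory=RealDictCursor) as cur:
            cur.execute(failed_query)
            results = cur.fetchall()
    except psycopg2.Error:
        logging.exception("Query for /Q5 failed")
        # A non-2xx status lets the client-side failure handler kick in.
        return json.dumps({'error': 'database error'}), 500
    return json.dumps(results, default=date_handler)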
I don't really get the purpose of the get_db function. Why not simply initialize the database at the very beginning, like so:
db = connect_db()
And then use db in failed_access directly. It seems you wanted to do lazy initialization, but is it really worth it? Sooner or later, the database will be used anyway. Initializing upfront is straightforward, simple, and less error prone.
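A sketch of how the route would then look, reusing RealDictCursor, connect_db and date_handler from your question:

db = connect_db()  # connect once at import time instead of lazily per request

@main.route('/Q5')
@login_required
def failed_access():
    with db.cursor(cursor_factory=RealDictCursor) as cur:
        cur.execute("""select * from prototype.Q5""")
        results = cur.fetchall()
    return json.dumps(results, default=date_handler)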
data = json.dumps(results, indent=2, default=date_handler)
Since the JSON data is only read by JavaScript, the indentation is not needed; it only consumes extra bandwidth. Better to write:
data = json.dumps(results, default=date_handler)
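If you want to trim a little more, json.dumps also accepts a separators argument that removes the whitespace it otherwise inserts after commas and colons; this is optional and nothing above depends on it:

data = json.dumps(results, default=date_handler, separators=(',', ':'))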