\$\begingroup\$

I want to optimize the performance of this SQL query. If I populate this hashtable with one million keys, the query takes around a minute. How can I optimize this Java method for faster execution?

private HashMap<String, Boolean> selectedIds = new HashMap<>();

public void deleteSelectedIDs() throws SQLException {
    if (ds == null) {
        throw new SQLException();
    }
    Connection conn = ds.getConnection();
    if (conn == null) {
        throw new SQLException();
    }
    PreparedStatement ps = null;
    ResultSet resultSet = null;
    try {
        conn.setAutoCommit(false);
        boolean committed = false;
        try {
            String sqlDeleteQuery = "DELETE FROM ACTIVESESSIONSLOG WHERE ASESSIONID = ?";
            Set<String> keySet = selectedIds.keySet();
            String[] keys = new String[]{};
            keys = selectedIds.keySet().toArray(keys);
            ps = conn.prepareStatement(sqlDeleteQuery);
            for (int i = 0; i < keys.length; i++) {
                if (selectedIds.get(keys[i]).booleanValue()) {
                    ps.setString(1, keys[i]);
                    ps.executeUpdate();
                    ps.clearParameters();
                    selectedIds.put(keys[i], false); //get(keys[i]) = false;
                }
            }
            conn.commit();
            committed = true;
            //selectedIds.clear();
        } finally {
            if (!committed) {
                conn.rollback();
            }
        }
    } finally {
        ps.close();
        conn.close();
    }
}
palacsint
asked May 24, 2012 at 16:41
\$\endgroup\$

2 Answers

\$\begingroup\$

Per the other example - step 1 should be to get out your profiler.

In any case - I suspect you would get significantly reduced round trip times by taking advantage of JDBC batching capability. Here's an example: http://www.tutorialspoint.com/jdbc/jdbc-batch-processing.htm (see the second one - using a prepared statement).

In short - you create your DELETE prepared statement, add the values to a batch, and then execute the batch in one go. Using a prepared statement reduces parse time on the server.

Depending on whether you want to ramp up the complexity, you could also split the rows into chunks and delete them in parallel threads.
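A minimal sketch of the batched approach described above (table and column names follow the question; `collectTrueKeys` is a hypothetical helper, not part of the original code):

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

public class BatchDelete {

    // Collect the keys whose flag is true; this part is plain Java.
    static List<String> collectTrueKeys(Map<String, Boolean> selectedIds) {
        List<String> keys = new ArrayList<>();
        for (Map.Entry<String, Boolean> e : selectedIds.entrySet()) {
            if (Boolean.TRUE.equals(e.getValue())) {
                keys.add(e.getKey());
            }
        }
        return keys;
    }

    // Queue one parameterized DELETE per flagged ID, then send them
    // to the server as a single batch instead of one round trip per row.
    static void deleteBatched(Connection conn, Map<String, Boolean> selectedIds)
            throws SQLException {
        String sql = "DELETE FROM ACTIVESESSIONSLOG WHERE ASESSIONID = ?";
        try (PreparedStatement ps = conn.prepareStatement(sql)) {
            for (String key : collectTrueKeys(selectedIds)) {
                ps.setString(1, key);
                ps.addBatch();      // queue the statement instead of executing it
            }
            ps.executeBatch();      // one network round trip for the whole batch
        }
    }
}
```

The transaction handling (`setAutoCommit(false)`, commit/rollback) from the original method would wrap the call to `deleteBatched` unchanged.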

answered May 24, 2012 at 22:13
\$\endgroup\$
\$\begingroup\$

If you look at this in a profiler, I would bet that most of the time is being spent on this line:

ps.executeUpdate();

I would gather all strings to be deleted into one statement like:

DELETE FROM ACTIVESESSIONSLOG WHERE ASESSIONID IN ('A','B','C')

and send that one SQL statement instead of 1,000,000 statements.
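One way to build such a statement safely for an arbitrary number of IDs is to generate one `?` placeholder per value and bind the values, rather than concatenating them into the SQL text (the helper names here are my own, not from the answer):

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import java.util.List;

public class InClauseDelete {

    // Build "DELETE ... WHERE ASESSIONID IN (?,?,?)" with one
    // placeholder per ID, so values are bound, not string-concatenated.
    static String buildDeleteIn(int count) {
        StringBuilder sb = new StringBuilder(
                "DELETE FROM ACTIVESESSIONSLOG WHERE ASESSIONID IN (");
        for (int i = 0; i < count; i++) {
            sb.append(i == 0 ? "?" : ",?");
        }
        return sb.append(')').toString();
    }

    static int deleteIn(Connection conn, List<String> ids) throws SQLException {
        try (PreparedStatement ps = conn.prepareStatement(buildDeleteIn(ids.size()))) {
            for (int i = 0; i < ids.size(); i++) {
                ps.setString(i + 1, ids.get(i));   // JDBC parameters are 1-based
            }
            return ps.executeUpdate();             // one statement, one round trip
        }
    }
}
```

This also answers the comment below about not knowing the row count in advance: the placeholder list is sized at runtime.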

Second:

instead of:

 Set<String> keySet = selectedIds.keySet();
 String[] keys = new String[]{};
 keys = selectedIds.keySet().toArray(keys);

I would use an iterator:

    Set<String> keySet = selectedIds.keySet();
    Iterator<String> iter = keySet.iterator();
    while (iter.hasNext()) {
        String key = iter.next();
        //.. revise, substituting key for keys[i]
    }

This avoids the need to copy all of those string references around. If selectedIds is not used for other purposes, it might be simpler still to operate on a Set of strings that contains only the IDs to delete.

palacsint
answered May 24, 2012 at 17:07
\$\endgroup\$
  • \$\begingroup\$ Thank you for the reply! The big problem is that I don't know the exact number of rows I have to delete each time; sometimes I might delete only 10 rows. How can I use one statement? \$\endgroup\$ Commented May 24, 2012 at 17:23
  • \$\begingroup\$ I tried to implement the iterator but I failed; something is missing. Would you show me the complete edited source code, please? \$\endgroup\$ Commented May 24, 2012 at 17:24
  • \$\begingroup\$ Some databases (like Oracle) have a maximum query length, and I am pretty sure that sending one delete with 1,000,000 values in the IN statement will hit it. \$\endgroup\$ Commented May 25, 2012 at 16:50
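As the last comment notes, databases cap statement size and IN-list length (Oracle, for instance, limits an IN list to 1000 expressions), so a very large key set would need to be split into fixed-size chunks, each sent as its own IN-list DELETE. A sketch of the splitting step (chunk size and helper name are my choices for illustration):

```java
import java.util.ArrayList;
import java.util.List;

public class Chunker {

    // Split a list into consecutive chunks of at most chunkSize elements,
    // so each chunk stays within the database's IN-list limit.
    static <T> List<List<T>> chunks(List<T> items, int chunkSize) {
        List<List<T>> result = new ArrayList<>();
        for (int i = 0; i < items.size(); i += chunkSize) {
            result.add(new ArrayList<>(
                    items.subList(i, Math.min(i + chunkSize, items.size()))));
        }
        return result;
    }
}
```

Each chunk can then be passed to an IN-list delete inside the same transaction, committing once at the end.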
